[2025-05-07 16:30:58,651] INFO Kafka Connect worker initializing ... (org.apache.kafka.connect.cli.AbstractConnectCli:114)
[2025-05-07 16:30:58,657] INFO WorkerInfo values:
	jvm.args = -Xms4G, -Xmx8G, -XX:MetaspaceSize=256m, -XX:+UseG1GC, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote=true, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../logs, -Dlog4j.configuration=file:bin/../config/connect-log4j.properties
	jvm.spec = Oracle Corporation, Java HotSpot(TM) 64-Bit Server VM, 17.0.7, 17.0.7+8-LTS-224
	jvm.classpath = /home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/activation-1.1.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/argparse4j-0.7.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/audience-annotations-0.12.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/caffeine-2.9.3.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-beanutils-1.9.4.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-cli-1.4.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-collections-3.2.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-digester-2.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-io-2.14.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-lang3-3.12.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-logging-1.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/commons-validator-1.7.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-api-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-basic-auth-extension-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-json-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-mirror-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-mirror-client-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-runtime-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/connect-transforms-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/error_prone_annotations-2.10.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/hk2-api-2.6.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/hk2-locator-2.6.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/hk2-utils-2.6.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-annotations-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-core-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-databind-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-dataformat-csv-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-datatype-jdk8-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-jaxrs-base-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-jaxrs-json-provider-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-module-afterburner-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-module-jaxb-annotations-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jackson-module-scala_2.13-2.16.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jakarta.inject-2.6.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/javassist-3.29.2-GA.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/javax.activation-api-1.2.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/javax.annotation-api-1.3.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/javax.servlet-api-3.1.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jaxb-api-2.3.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jersey-client-2.39.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jersey-common-2.39.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jersey-container-servlet-core-2.39.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jersey-hk2-2.39.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jersey-server-2.39.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-client-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-continuation-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-http-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-io-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-security-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-server-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-servlet-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-servlets-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-util-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jetty-util-ajax-9.4.56.v20240826.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jline-3.25.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jopt-simple-5.0.4.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jose4j-0.9.4.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/jsr305-3.0.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka_2.13-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-clients-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-group-coordinator-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-group-coordinator-api-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-metadata-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-raft-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-server-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-server-common-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-shell-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-storage-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-storage-api-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-streams-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-streams-examples-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-streams-scala_2.13-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-streams-test-utils-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-tools-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-tools-api-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/kafka-transaction-coordinator-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/lz4-java-1.8.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/maven-artifact-3.9.6.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/metrics-core-2.2.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/metrics-core-4.1.12.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-buffer-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-codec-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-common-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-handler-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-resolver-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-transport-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-transport-classes-epoll-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-transport-native-epoll-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/netty-transport-native-unix-common-4.1.111.Final.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/opentelemetry-proto-1.0.0-alpha.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/paranamer-2.8.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/pcollections-4.0.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/plexus-utils-3.5.1.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/protobuf-java-3.25.5.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/reflections-0.10.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/reload4j-1.2.25.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/rocksdbjni-7.9.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/scala-library-2.13.14.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/scala-logging_2.13-3.9.5.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/scala-reflect-2.13.14.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/slf4j-api-1.7.36.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/snappy-java-1.1.10.5.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/swagger-annotations-2.2.8.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/trogdor-3.9.0.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/zookeeper-3.8.4.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/zookeeper-jute-3.8.4.jar:/home/zyadmin/app/kafka/kafka_2.13-3.9.0/bin/../libs/zstd-jni-1.5.6-4.jar
	os.spec = Linux, amd64, 3.10.0-1160.95.1.el7.x86_64
	os.vcpus = 4
 (org.apache.kafka.connect.runtime.WorkerInfo:72)
[2025-05-07 16:30:58,660] INFO Scanning for plugin classes. This might take a moment ... (org.apache.kafka.connect.cli.AbstractConnectCli:120)
[2025-05-07 16:30:58,715] INFO Loading plugin from: /home/zyadmin/app/plugin/kafka-connect-custom (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:30:59,090] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/kafka-connect-custom/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:30:59,092] INFO Loading plugin from: /home/zyadmin/app/plugin/debezium-connector-jdbc (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:30:59,198] INFO Using up-to-date JsonConverter implementation (io.debezium.converters.CloudEventsConverter:120)
[2025-05-07 16:30:59,297] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/debezium-connector-jdbc/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:30:59,581] INFO Loading plugin from: /home/zyadmin/app/plugin/debezium-connector-oracle (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:30:59,804] INFO Using up-to-date JsonConverter implementation (io.debezium.converters.CloudEventsConverter:120)
[2025-05-07 16:30:59,870] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/debezium-connector-oracle/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:30:59,988] INFO Loading plugin from: /home/zyadmin/app/plugin/debezium-connector-mysql (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:00,109] INFO Using up-to-date JsonConverter implementation (io.debezium.converters.CloudEventsConverter:120)
[2025-05-07 16:31:00,198] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/debezium-connector-mysql/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:00,224] INFO Loading plugin from: classpath (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:00,233] INFO Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@251a69d7 (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:00,234] INFO Scanning plugins with ServiceLoaderScanner took 1519 ms (org.apache.kafka.connect.runtime.isolation.PluginScanner:71)
[2025-05-07 16:31:00,242] INFO Loading plugin from: /home/zyadmin/app/plugin/kafka-connect-custom (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:00,426] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/kafka-connect-custom/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:00,427] INFO Loading plugin from: /home/zyadmin/app/plugin/debezium-connector-jdbc (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:02,706] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/debezium-connector-jdbc/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:02,716] INFO Loading plugin from: /home/zyadmin/app/plugin/debezium-connector-oracle (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:03,815] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/debezium-connector-oracle/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:03,816] INFO Loading plugin from: /home/zyadmin/app/plugin/debezium-connector-mysql (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:04,043] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/zyadmin/app/plugin/debezium-connector-mysql/} (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:04,043] INFO Loading plugin from: classpath (org.apache.kafka.connect.runtime.isolation.PluginScanner:76)
[2025-05-07 16:31:05,548] INFO Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@251a69d7 (org.apache.kafka.connect.runtime.isolation.PluginScanner:81)
[2025-05-07 16:31:05,548] INFO Scanning plugins with ReflectionScanner took 5306 ms (org.apache.kafka.connect.runtime.isolation.PluginScanner:71)
[2025-05-07 16:31:05,552] WARN One or more plugins are missing ServiceLoader manifests may not be usable with plugin.discovery=service_load: [
	file:/home/zyadmin/app/plugin/debezium-connector-jdbc/	io.debezium.connector.jdbc.transforms.CollectionNameTransformation	transformation	3.2.0.Alpha1
	file:/home/zyadmin/app/plugin/debezium-connector-jdbc/	io.debezium.connector.jdbc.transforms.FieldNameTransformation	transformation	3.2.0.Alpha1
	file:/home/zyadmin/app/plugin/debezium-connector-oracle/	io.debezium.transforms.VectorToJsonConverter	transformation	3.1.0.Final
	file:/home/zyadmin/app/plugin/debezium-connector-jdbc/	io.debezium.transforms.VectorToJsonConverter	transformation	3.2.0.Alpha1
	file:/home/zyadmin/app/plugin/debezium-connector-mysql/	io.debezium.transforms.VectorToJsonConverter	transformation	3.2.0.Alpha1
] Read the documentation at https://kafka.apache.org/documentation.html#connect_plugindiscovery for instructions on migrating your plugins to take advantage of the performance improvements of service_load mode. To silence this warning, set plugin.discovery=only_scan in the worker config. (org.apache.kafka.connect.runtime.isolation.Plugins:123)
[2025-05-07 16:31:05,553] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,553] INFO Added plugin 'org.apache.kafka.connect.transforms.Filter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'com.your.debezium.transform.UppercaseNestedTransform' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,554] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'io.debezium.converters.BinaryDataConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'io.debezium.transforms.ExtractNewRecordState' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'io.debezium.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'io.debezium.transforms.partitions.PartitionRouting' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,555] INFO Added plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'io.debezium.transforms.outbox.EventRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'io.debezium.transforms.VectorToJsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,556] INFO Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'io.debezium.connector.mysql.MySqlConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'com.your.debezium.transform.DefaultValueCleaner' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,557] INFO Added plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'io.debezium.connector.oracle.OracleConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'io.debezium.transforms.HeaderToValue' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'io.debezium.transforms.SchemaChangeEventFilter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'io.debezium.transforms.ByLogicalTableRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,558] INFO Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'com.your.debezium.transform.NestedFieldRenamer' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'io.debezium.transforms.TimezoneConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'io.debezium.connector.jdbc.transforms.CollectionNameTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,559] INFO Added plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'io.debezium.transforms.ExtractChangedRecordState' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'io.debezium.connector.jdbc.transforms.FieldNameTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'org.apache.kafka.connect.converters.BooleanConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,560] INFO Added plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,561] INFO Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,561] INFO Added plugin 'io.debezium.converters.CloudEventsConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,561] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,561] INFO Added plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:105)
[2025-05-07 16:31:05,563] INFO Added alias 'VectorToJsonConverter' to plugin 'io.debezium.transforms.VectorToJsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'CloudEventsConverter' to plugin 'io.debezium.converters.CloudEventsConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'DebeziumOracle' to plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'EnvVar' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'EnvVarConfigProvider' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'MirrorCheckpointConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'Boolean' to plugin 'org.apache.kafka.connect.converters.BooleanConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'MySqlConnector' to plugin 'io.debezium.connector.mysql.MySqlConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,564] INFO Added alias 'HeaderToValue' to plugin 'io.debezium.transforms.HeaderToValue' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'PartitionRouting' to plugin 'io.debezium.transforms.partitions.PartitionRouting' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'StringConverter' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'IntegerConverter' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'LongConverter' to plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'ExtractSchemaToNewRecord' to plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'BinaryData' to plugin 'io.debezium.converters.BinaryDataConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,565] INFO Added alias 'DirectoryConfigProvider' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'ReadToInsertEvent' to plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'ShortConverter' to plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'CollectionNameTransformation' to plugin 'io.debezium.connector.jdbc.transforms.CollectionNameTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'DebeziumOracleConnectRestExtension' to plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'TimezoneConverter' to plugin 'io.debezium.transforms.TimezoneConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'Simple' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'FieldNameTransformation' to plugin 'io.debezium.connector.jdbc.transforms.FieldNameTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'AllConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,566] INFO Added alias 'ExtractNewRecordState' to plugin 'io.debezium.transforms.ExtractNewRecordState' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'FieldName' to plugin 'io.debezium.connector.jdbc.transforms.FieldNameTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'Directory' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'BooleanConverter' to plugin 'org.apache.kafka.connect.converters.BooleanConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'DefaultValueCleaner' to plugin 'com.your.debezium.transform.DefaultValueCleaner' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'ExtractChangedRecordState' to plugin 'io.debezium.transforms.ExtractChangedRecordState' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 16:31:05,567] INFO Added alias 'OracleConnector' to plugin 'io.debezium.connector.oracle.OracleConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109)
[2025-05-07 
16:31:05,567] INFO Added alias 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'JsonConverter' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'DebeziumMySql' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'JdbcSinkConnector' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'DebeziumMySqlConnectRestExtension' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'JdbcSink' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'NoneConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,568] INFO Added alias 'CollectionName' to plugin 'io.debezium.connector.jdbc.transforms.CollectionNameTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'FileConfigProvider' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'SchemaChangeEventFilter' to plugin 'io.debezium.transforms.SchemaChangeEventFilter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'ConvertCloudEventToSaveableForm' to plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'Long' to plugin 
'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'File' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'FloatConverter' to plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'DoubleConverter' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,569] INFO Added alias 'NestedFieldRenamer' to plugin 'com.your.debezium.transform.NestedFieldRenamer' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'BinaryDataConverter' to plugin 'io.debezium.converters.BinaryDataConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'MirrorHeartbeatConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'UppercaseNestedTransform' to plugin 'com.your.debezium.transform.UppercaseNestedTransform' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'InsertHeader' to plugin 
'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'MirrorSourceConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'PrincipalConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,570] INFO Added alias 'Filter' to plugin 'org.apache.kafka.connect.transforms.Filter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:109) [2025-05-07 16:31:05,607] INFO DistributedConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null auto.include.jmx.reporter = true bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = config.providers = [] config.storage.replication.factor = 1 config.storage.topic = connect-configs connect.protocol = sessioned connections.max.idle.ms = 540000 connector.client.config.override.policy = All exactly.once.source.support = disabled group.id = connect-cluster header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter heartbeat.interval.ms = 3000 inter.worker.key.generation.algorithm = 
HmacSHA256 inter.worker.key.size = null inter.worker.key.ttl.ms = 3600000 inter.worker.signature.algorithm = HmacSHA256 inter.worker.verification.algorithms = [HmacSHA256] key.converter = class org.apache.kafka.connect.json.JsonConverter listeners = [http://:8083] metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 offset.flush.interval.ms = 10000 offset.flush.timeout.ms = 5000 offset.storage.partitions = 25 offset.storage.replication.factor = 1 offset.storage.topic = connect-offsets plugin.discovery = hybrid_warn plugin.path = [/home/zyadmin/app/plugin] rebalance.timeout.ms = 60000 receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 40000 response.http.headers.config = rest.advertised.host.name = null rest.advertised.listener = null rest.advertised.port = null rest.extension.classes = [] retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null scheduled.rebalance.max.delay.ms = 300000 security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS status.storage.partitions = 5 status.storage.replication.factor = 1 status.storage.topic = connect-status task.shutdown.graceful.timeout.ms = 5000 topic.creation.enable = true topic.tracking.allow.reset = true topic.tracking.enable = true value.converter = class org.apache.kafka.connect.json.JsonConverter worker.sync.timeout.ms = 3000 worker.unsync.backoff.ms = 300000 (org.apache.kafka.connect.runtime.distributed.DistributedConfig:371) [2025-05-07 16:31:05,608] INFO Creating Kafka admin client (org.apache.kafka.connect.runtime.WorkerConfig:281) [2025-05-07 16:31:05,611] INFO AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 
metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null 
ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:371) [2025-05-07 16:31:05,691] INFO These configurations '[config.storage.topic, status.storage.topic, group.id, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:380) [2025-05-07 16:31:05,691] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125) [2025-05-07 16:31:05,691] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126) [2025-05-07 16:31:05,691] INFO Kafka startTimeMs: 1746606665691 (org.apache.kafka.common.utils.AppInfoParser:127) [2025-05-07 16:31:06,059] INFO Kafka cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.connect.runtime.WorkerConfig:298) [2025-05-07 16:31:06,061] INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser:89) [2025-05-07 16:31:06,067] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:685) [2025-05-07 16:31:06,068] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:689) [2025-05-07 16:31:06,068] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:695) [2025-05-07 16:31:06,075] INFO PublicConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null listeners = [http://:8083] response.http.headers.config = rest.advertised.host.name = null rest.advertised.listener = null rest.advertised.port = null rest.extension.classes = [] ssl.cipher.suites = null 
ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS topic.tracking.allow.reset = true topic.tracking.enable = true (org.apache.kafka.connect.runtime.rest.RestServerConfig$PublicConfig:371) [2025-05-07 16:31:06,088] INFO Logging initialized @8121ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:170) [2025-05-07 16:31:06,127] INFO Added connector for http://:8083 (org.apache.kafka.connect.runtime.rest.RestServer:125) [2025-05-07 16:31:06,127] INFO Initializing REST server (org.apache.kafka.connect.runtime.rest.RestServer:196) [2025-05-07 16:31:06,150] INFO jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.7+8-LTS-224 (org.eclipse.jetty.server.Server:375) [2025-05-07 16:31:06,179] INFO Started http_8083@76802b8c{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:333) [2025-05-07 16:31:06,179] INFO Started @8212ms (org.eclipse.jetty.server.Server:415) [2025-05-07 16:31:06,204] INFO Advertised URI: http://99.12.11.33:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:416) [2025-05-07 16:31:06,204] INFO REST server listening at http://99.12.11.33:8083/, advertising URL http://99.12.11.33:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:216) [2025-05-07 16:31:06,204] INFO Advertised URI: http://99.12.11.33:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:416) [2025-05-07 16:31:06,204] INFO REST admin endpoints at 
http://99.12.11.33:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:219) [2025-05-07 16:31:06,205] INFO Advertised URI: http://99.12.11.33:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:416) [2025-05-07 16:31:06,205] INFO Setting up All Policy for ConnectorClientConfigOverride. This will allow all client configurations to be overridden (org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy:45) [2025-05-07 16:31:06,211] INFO JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false (org.apache.kafka.connect.json.JsonConverterConfig:371) [2025-05-07 16:31:06,232] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125) [2025-05-07 16:31:06,232] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126) [2025-05-07 16:31:06,232] INFO Kafka startTimeMs: 1746606666232 (org.apache.kafka.common.utils.AppInfoParser:127) [2025-05-07 16:31:06,239] INFO JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false (org.apache.kafka.connect.json.JsonConverterConfig:371) [2025-05-07 16:31:06,239] INFO JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false (org.apache.kafka.connect.json.JsonConverterConfig:371) [2025-05-07 16:31:06,258] INFO Advertised URI: http://99.12.11.33:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:416) [2025-05-07 16:31:06,293] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125) [2025-05-07 16:31:06,294] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126) [2025-05-07 16:31:06,294] INFO Kafka startTimeMs: 1746606666293 (org.apache.kafka.common.utils.AppInfoParser:127) [2025-05-07 16:31:06,297] INFO Kafka Connect 
worker initialization took 7644ms (org.apache.kafka.connect.cli.AbstractConnectCli:141) [2025-05-07 16:31:06,297] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:67) [2025-05-07 16:31:06,300] INFO Initializing REST resources (org.apache.kafka.connect.runtime.rest.RestServer:223) [2025-05-07 16:31:06,300] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Herder starting (org.apache.kafka.connect.runtime.distributed.DistributedHerder:375) [2025-05-07 16:31:06,301] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:233) [2025-05-07 16:31:06,301] INFO Starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:232) [2025-05-07 16:31:06,302] INFO Starting KafkaBasedLog with topic connect-offsets reportErrorsToCallback=false (org.apache.kafka.connect.util.KafkaBasedLog:254) [2025-05-07 16:31:06,302] INFO AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connect-cluster-shared-admin connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 
sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:371) [2025-05-07 16:31:06,311] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, group.id, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are 
not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:380) [2025-05-07 16:31:06,311] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125) [2025-05-07 16:31:06,312] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126) [2025-05-07 16:31:06,312] INFO Kafka startTimeMs: 1746606666311 (org.apache.kafka.common.utils.AppInfoParser:127) [2025-05-07 16:31:06,346] INFO Adding admin resources to main listener (org.apache.kafka.connect.runtime.rest.RestServer:238) [2025-05-07 16:31:06,398] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session:334) [2025-05-07 16:31:06,398] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session:339) [2025-05-07 16:31:06,400] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session:132) [2025-05-07 16:31:07,062] INFO Started o.e.j.s.ServletContextHandler@3d225fe9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921) [2025-05-07 16:31:07,062] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:303) [2025-05-07 16:31:07,062] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:77) [2025-05-07 16:31:07,323] INFO Created topic (name=connect-offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:416) [2025-05-07 16:31:07,331] INFO ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connect-cluster-offsets compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = 
[] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig:371)
[2025-05-07 16:31:07,357] INFO initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:31:07,381] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, group.id, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig:380)
[2025-05-07 16:31:07,381] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:31:07,381] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:31:07,381] INFO Kafka startTimeMs: 1746606667381 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:31:07,389] INFO ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connect-cluster-offsets client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-cluster group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:371)
[2025-05-07 16:31:07,403] INFO [Producer clientId=connect-cluster-offsets] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:31:07,404] INFO initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:31:07,439] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.consumer.ConsumerConfig:380)
[2025-05-07 16:31:07,439] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:31:07,439] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:31:07,439] INFO Kafka startTimeMs: 1746606667439 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:31:07,447] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:31:07,457] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Assigned to partition(s): connect-offsets-0, connect-offsets-5, connect-offsets-10, connect-offsets-20, connect-offsets-15, connect-offsets-9, connect-offsets-11, connect-offsets-4, connect-offsets-16, connect-offsets-17, connect-offsets-3, connect-offsets-24, connect-offsets-23, connect-offsets-13, connect-offsets-18, connect-offsets-22, connect-offsets-8, connect-offsets-2, connect-offsets-12, connect-offsets-19, connect-offsets-14, connect-offsets-1, connect-offsets-6, connect-offsets-7, connect-offsets-21 (org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer:579)
[2025-05-07 16:31:07,459] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,460] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-5 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,460] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-10 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,460] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-20 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,460] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-15 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,460] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-9 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-11 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-4 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-16 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-17 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-3 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-24 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-23 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-13 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-18 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,461] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-22 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-8 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-2 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-12 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-19 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-14 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-1 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-6 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-7 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,462] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Seeking to earliest offset of partition connect-offsets-21 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,514] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,515] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,516] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,516] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,516] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,516] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,516] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,516] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,517] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,517] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,517] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,517] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,517] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,518] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,518] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,518] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,518] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,518] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,518] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,519] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,519] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,519] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,519] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,519] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,520] INFO [Consumer clientId=connect-cluster-offsets, groupId=connect-cluster] Resetting offset for partition connect-offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,520] INFO Finished reading KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:311)
[2025-05-07 16:31:07,520] INFO Started KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:313)
[2025-05-07 16:31:07,520] INFO Finished reading offsets topic and starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:249)
[2025-05-07 16:31:07,522] INFO Worker started (org.apache.kafka.connect.runtime.Worker:243)
[2025-05-07 16:31:07,522] INFO Starting KafkaBasedLog with topic connect-status reportErrorsToCallback=false (org.apache.kafka.connect.util.KafkaBasedLog:254)
[2025-05-07 16:31:07,640] INFO Created topic (name=connect-status, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:416)
[2025-05-07 16:31:07,640] INFO ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connect-cluster-statuses compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig:371)
[2025-05-07 16:31:07,641] INFO initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:31:07,648] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, group.id, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig:380)
[2025-05-07 16:31:07,648] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:31:07,648] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:31:07,648] INFO Kafka startTimeMs: 1746606667648 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:31:07,649] INFO ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connect-cluster-statuses client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-cluster group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:371)
[2025-05-07 16:31:07,651] INFO initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:31:07,656] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.consumer.ConsumerConfig:380)
[2025-05-07 16:31:07,656] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:31:07,657] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:31:07,657] INFO Kafka startTimeMs: 1746606667656 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:31:07,657] INFO [Producer clientId=connect-cluster-statuses] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:31:07,666] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:31:07,668] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Assigned to partition(s): connect-status-0, connect-status-4, connect-status-1, connect-status-2, connect-status-3 (org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer:579)
[2025-05-07 16:31:07,668] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Seeking to earliest offset of partition connect-status-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,668] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Seeking to earliest offset of partition connect-status-4 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,668] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Seeking to earliest offset of partition connect-status-1 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,668] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Seeking to earliest offset of partition connect-status-2 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,668] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Seeking to earliest offset of partition connect-status-3 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715)
[2025-05-07 16:31:07,680] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Resetting offset for partition connect-status-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,680] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Resetting offset for partition connect-status-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,680] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Resetting offset for partition connect-status-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,680] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Resetting offset for partition connect-status-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,680] INFO [Consumer clientId=connect-cluster-statuses, groupId=connect-cluster] Resetting offset for partition connect-status-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:31:07,681] INFO Finished reading KafkaBasedLog for topic connect-status (org.apache.kafka.connect.util.KafkaBasedLog:311)
[2025-05-07 16:31:07,681] INFO Started KafkaBasedLog for topic connect-status (org.apache.kafka.connect.util.KafkaBasedLog:313)
[2025-05-07 16:31:07,683] INFO Starting KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:378)
[2025-05-07 16:31:07,683] INFO Starting KafkaBasedLog with topic connect-configs reportErrorsToCallback=false (org.apache.kafka.connect.util.KafkaBasedLog:254)
[2025-05-07 16:31:07,730] INFO Created topic (name=connect-configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:416)
[2025-05-07 16:31:07,731] INFO ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connect-cluster-configs compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000
metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig:371) [2025-05-07 16:31:07,732] INFO initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270) [2025-05-07 16:31:07,737] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, group.id, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. 
(org.apache.kafka.clients.producer.ProducerConfig:380) [2025-05-07 16:31:07,737] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125) [2025-05-07 16:31:07,737] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126) [2025-05-07 16:31:07,737] INFO Kafka startTimeMs: 1746606667737 (org.apache.kafka.common.utils.AppInfoParser:127) [2025-05-07 16:31:07,738] INFO ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connect-cluster-configs client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-cluster group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit 
sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:371) 
[2025-05-07 16:31:07,739] INFO initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270) [2025-05-07 16:31:07,742] INFO [Producer clientId=connect-cluster-configs] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365) [2025-05-07 16:31:07,746] INFO These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.consumer.ConsumerConfig:380) [2025-05-07 16:31:07,746] INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125) [2025-05-07 16:31:07,746] INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126) [2025-05-07 16:31:07,746] INFO Kafka startTimeMs: 1746606667746 (org.apache.kafka.common.utils.AppInfoParser:127) [2025-05-07 16:31:07,751] INFO [Consumer clientId=connect-cluster-configs, groupId=connect-cluster] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365) [2025-05-07 16:31:07,753] INFO [Consumer clientId=connect-cluster-configs, groupId=connect-cluster] Assigned to partition(s): connect-configs-0 (org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer:579) [2025-05-07 16:31:07,753] INFO [Consumer clientId=connect-cluster-configs, groupId=connect-cluster] Seeking to earliest offset of partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:715) [2025-05-07 16:31:07,766] INFO [Consumer clientId=connect-cluster-configs, groupId=connect-cluster] Resetting offset for partition connect-configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407) [2025-05-07 16:31:07,766] INFO Finished reading KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:311) [2025-05-07 16:31:07,766] INFO Started KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:313) [2025-05-07 16:31:07,766] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:402) [2025-05-07 16:31:07,777] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365) [2025-05-07 16:31:08,593] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Discovered group coordinator iZuf66nl2clxz2d4rj261wZ:9092 (id: 2147483647 rack: null) (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:937) [2025-05-07 16:31:08,595] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:243) [2025-05-07 16:31:08,595] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:605) [2025-05-07 16:31:08,619] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:605) [2025-05-07 16:31:08,635] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=1, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:666) [2025-05-07 16:31:08,704] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully 
synced group in generation Generation{generationId=1, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:843) [2025-05-07 16:31:08,705] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', leaderUrl='http://99.12.11.33:8083/', offset=-1, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2648) [2025-05-07 16:31:08,706] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:387) [2025-05-07 16:31:08,706] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connectors and tasks using config offset -1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1979) [2025-05-07 16:31:08,706] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2008) [2025-05-07 16:31:08,771] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Session key updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2510) [2025-05-07 16:32:25,629] INFO Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker (io.debezium.config.CommonConnectorConfig:1701) [2025-05-07 16:32:25,983] INFO Using 'SHOW MASTER STATUS' to get binary log status (io.debezium.connector.mysql.jdbc.MySqlConnection:41) [2025-05-07 16:32:25,991] INFO Successfully tested connection for 
jdbc:mysql://localhost:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'omskafka' (io.debezium.connector.binlog.BinlogConnector:66) [2025-05-07 16:32:26,000] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection:983) [2025-05-07 16:32:26,005] INFO AbstractConfig values: (org.apache.kafka.common.config.AbstractConfig:371) [2025-05-07 16:32:26,018] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Connector mysql-connector config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2448) [2025-05-07 16:32:26,019] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:243) [2025-05-07 16:32:26,019] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:605) [2025-05-07 16:32:26,023] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=2, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:666) [2025-05-07 16:32:26,027] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=2, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:843) [2025-05-07 16:32:26,027] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Joined group at generation 2 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', 
leaderUrl='http://99.12.11.33:8083/', offset=2, connectorIds=[mysql-connector], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2648) [2025-05-07 16:32:26,028] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connectors and tasks using config offset 2 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1979) [2025-05-07 16:32:26,029] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connector mysql-connector (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2097) [2025-05-07 16:32:26,032] INFO [mysql-connector|worker] Creating connector mysql-connector of type io.debezium.connector.mysql.MySqlConnector (org.apache.kafka.connect.runtime.Worker:313) [2025-05-07 16:32:26,032] INFO [mysql-connector|worker] SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig:371) [2025-05-07 16:32:26,033] INFO [mysql-connector|worker] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector 
offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371) [2025-05-07 16:32:26,037] INFO [mysql-connector|worker] EnrichedSourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.dev_kafka_group.exclude = [] topic.creation.dev_kafka_group.include = [OMS.dev_kafka.*] topic.creation.dev_kafka_group.partitions = 1 topic.creation.dev_kafka_group.replication.factor = 1 topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:371) [2025-05-07 16:32:26,038] INFO [mysql-connector|worker] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] 
topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.dev_kafka_group.exclude = [] topic.creation.dev_kafka_group.include = [OMS.dev_kafka.*] topic.creation.dev_kafka_group.partitions = 1 topic.creation.dev_kafka_group.replication.factor = 1 topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371) [2025-05-07 16:32:26,043] INFO [mysql-connector|worker] Instantiated connector mysql-connector with version 3.2.0.Alpha1 of type class io.debezium.connector.mysql.MySqlConnector (org.apache.kafka.connect.runtime.Worker:335) [2025-05-07 16:32:26,043] INFO [mysql-connector|worker] Finished creating connector mysql-connector (org.apache.kafka.connect.runtime.Worker:356) [2025-05-07 16:32:26,044] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2008) [2025-05-07 16:32:26,053] INFO SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig:371) [2025-05-07 16:32:26,053] INFO EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = 
false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371) [2025-05-07 16:32:26,054] INFO EnrichedSourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.dev_kafka_group.exclude = [] topic.creation.dev_kafka_group.include = [OMS.dev_kafka.*] topic.creation.dev_kafka_group.partitions = 1 topic.creation.dev_kafka_group.replication.factor = 1 topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:371) [2025-05-07 16:32:26,055] INFO EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null 
key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.dev_kafka_group.exclude = [] topic.creation.dev_kafka_group.include = [OMS.dev_kafka.*] topic.creation.dev_kafka_group.partitions = 1 topic.creation.dev_kafka_group.replication.factor = 1 topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371) [2025-05-07 16:32:26,067] INFO [0:0:0:0:0:0:0:1] - - [07/May/2025:08:32:25 +0000] "POST /connectors HTTP/1.1" 201 2124 "-" "curl/7.29.0" 574 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-05-07 16:32:26,082] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Tasks [mysql-connector-0] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2467) [2025-05-07 16:32:26,083] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:243) [2025-05-07 16:32:26,084] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:605) [2025-05-07 16:32:26,085] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=3, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:666) [2025-05-07 16:32:26,093] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=3, 
memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:843)
[2025-05-07 16:32:26,093] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Joined group at generation 3 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', leaderUrl='http://99.12.11.33:8083/', offset=4, connectorIds=[mysql-connector], taskIds=[mysql-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2648)
[2025-05-07 16:32:26,094] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connectors and tasks using config offset 4 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1979)
[2025-05-07 16:32:26,095] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting task mysql-connector-0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2022)
[2025-05-07 16:32:26,099] INFO [mysql-connector|task-0] Creating task mysql-connector-0 (org.apache.kafka.connect.runtime.Worker:646)
[2025-05-07 16:32:26,101] INFO [mysql-connector|task-0] ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = mysql-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig:371)
[2025-05-07 16:32:26,101] INFO [mysql-connector|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = mysql-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:26,102] INFO [mysql-connector|task-0] TaskConfig values: task.class = class io.debezium.connector.mysql.MySqlConnectorTask (org.apache.kafka.connect.runtime.TaskConfig:371)
[2025-05-07 16:32:26,105] INFO [mysql-connector|task-0] Instantiated task mysql-connector-0 with version 3.2.0.Alpha1 of type io.debezium.connector.mysql.MySqlConnectorTask (org.apache.kafka.connect.runtime.Worker:665)
[2025-05-07 16:32:26,105] INFO [mysql-connector|task-0] JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true (org.apache.kafka.connect.json.JsonConverterConfig:371)
[2025-05-07 16:32:26,105] INFO [mysql-connector|task-0] Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker:678)
[2025-05-07 16:32:26,106] INFO [mysql-connector|task-0] JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true (org.apache.kafka.connect.json.JsonConverterConfig:371)
[2025-05-07 16:32:26,106] INFO [mysql-connector|task-0] Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker:684)
[2025-05-07 16:32:26,106] INFO [mysql-connector|task-0] Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker:691)
[2025-05-07 16:32:26,109] INFO [mysql-connector|task-0] Initializing: org.apache.kafka.connect.runtime.TransformationChain{} (org.apache.kafka.connect.runtime.Worker:1795)
[2025-05-07 16:32:26,110] INFO [mysql-connector|task-0] SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig:371)
[2025-05-07 16:32:26,110] INFO [mysql-connector|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:26,111] INFO [mysql-connector|task-0] EnrichedSourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.dev_kafka_group.exclude = [] topic.creation.dev_kafka_group.include = [OMS.dev_kafka.*] topic.creation.dev_kafka_group.partitions = 1 topic.creation.dev_kafka_group.replication.factor = 1 topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:371)
[2025-05-07 16:32:26,111] INFO [mysql-connector|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.dev_kafka_group.exclude = [] topic.creation.dev_kafka_group.include = [OMS.dev_kafka.*] topic.creation.dev_kafka_group.partitions = 1 topic.creation.dev_kafka_group.replication.factor = 1 topic.creation.groups = [dev_kafka_group] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:26,112] INFO [mysql-connector|task-0] ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-mysql-connector-0 compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig:371)
[2025-05-07 16:32:26,112] INFO [mysql-connector|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:32:26,116] INFO [mysql-connector|task-0] These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig:380)
[2025-05-07 16:32:26,116] INFO [mysql-connector|task-0] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:26,116] INFO [mysql-connector|task-0] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:26,116] INFO [mysql-connector|task-0] Kafka startTimeMs: 1746606746116 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:26,118] INFO [mysql-connector|task-0] AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connector-adminclient-mysql-connector-0 connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:371)
[2025-05-07 16:32:26,119] INFO [mysql-connector|task-0] These configurations '[config.storage.topic, metrics.context.connect.group.id, status.storage.topic, group.id, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:380)
[2025-05-07 16:32:26,120] INFO [mysql-connector|task-0] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:26,120] INFO [mysql-connector|task-0] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:26,120] INFO [mysql-connector|task-0] Kafka startTimeMs: 1746606746120 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:26,125] INFO [mysql-connector|task-0] [Producer clientId=connector-producer-mysql-connector-0] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:32:26,132] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2008)
[2025-05-07 16:32:26,134] INFO [mysql-connector|task-0] Starting MySqlConnectorTask with configuration: connector.class = io.debezium.connector.mysql.MySqlConnector snapshot.locking.mode = minimal topic.creation.dev_kafka_group.include = OMS.dev_kafka.* database.ignore.ddl.commands = CREATE PROCEDURE,CREATE FUNCTION,CREATE TRIGGER,CREATE EVENT,CREATE VIEW,ALTER PROCEDURE,ALTER FUNCTION,ALTER TRIGGER,ALTER EVENT,ALTER VIEW,DROP PROCEDURE,DROP FUNCTION,DROP TRIGGER,DROP EVENT,DROP VIEW max.queue.size = 81920 topic.creation.default.partitions = 1 schema.history.internal.kafka.topic.acks = all schema.history.internal.kafka.topic.compression.type = none schema.history.internal.producer.override.acks = all schema.history.internal.store.only.monitored.tables.ddl = true schema.history.internal.store.only.captured.databases.ddl = false include.schema.changes = true topic.prefix = OMS log.mining.query.filter = DEBUG schema.history.internal.kafka.topic = mysql5_schema_history poll.interval.ms = 500 schema.history.internal.producer.override.linger.ms = 100 topic.creation.default.replication.factor = 1 signal.data.collection = dev_kafka.debezium_signal snapshot.fetch.size = 2000 table.ignore.builtin.schemas = false log.connector = DEBUG log.mining.transaction.retention.hours = 1 snapshot.include.collection.list = dev_kafka.* database.user = omskafka topic.creation.include = OMS.dev_kafka.* datatype.propagate.source.type = geometry,json database.server.id = 1 topic.creation.default.cleanup.policy = delete signal.poll.interval.ms = 5000 schema.history.internal.kafka.bootstrap.servers = localhost:9092 topic.creation.default.retention.ms = 604800000 schema.history.internal.skip.unparseable.ddl = true database.port = 3306 topic.creation.groups = dev_kafka_group column.propagate.source.type = .* task.class = io.debezium.connector.mysql.MySqlConnectorTask database.hostname = localhost binlog.row.image = FULL database.password = ******** schema.name.adjustment.mode = avro name = mysql-connector table.include.list = dev_kafka.products max.batch.size = 20480 database.include.list = dev_kafka snapshot.mode = initial (io.debezium.connector.common.BaseSourceTask:253)
[2025-05-07 16:32:26,135] INFO [mysql-connector|task-0] Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker (io.debezium.config.CommonConnectorConfig:1701)
[2025-05-07 16:32:26,136] INFO [mysql-connector|task-0] Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy (io.debezium.config.CommonConnectorConfig:1401)
[2025-05-07 16:32:26,166] INFO [mysql-connector|task-0] Using 'SHOW MASTER STATUS' to get binary log status (io.debezium.connector.mysql.jdbc.MySqlConnection:41)
[2025-05-07 16:32:26,182] INFO [mysql-connector|task-0] No previous offsets found (io.debezium.connector.common.BaseSourceTask:539)
[2025-05-07 16:32:26,223] INFO [mysql-connector|task-0] KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=OMS-schemahistory, bootstrap.servers=localhost:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=OMS-schemahistory} (io.debezium.storage.kafka.history.KafkaSchemaHistory:249)
[2025-05-07 16:32:26,223] INFO [mysql-connector|task-0] KafkaSchemaHistory Producer config: {enable.idempotence=false, value.serializer=org.apache.kafka.common.serialization.StringSerializer, batch.size=32768, override.acks=all, bootstrap.servers=localhost:9092, max.in.flight.requests.per.connection=1, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, override.linger.ms=100, client.id=OMS-schemahistory} (io.debezium.storage.kafka.history.KafkaSchemaHistory:250)
[2025-05-07 16:32:26,224] INFO [mysql-connector|task-0] Requested thread factory for component MySqlConnector, id = OMS named = db-history-config-check (io.debezium.util.Threads:270)
[2025-05-07 16:32:26,226] INFO [mysql-connector|task-0] ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [localhost:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = OMS-schemahistory compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer (org.apache.kafka.clients.producer.ProducerConfig:371)
[2025-05-07 16:32:26,226] INFO [mysql-connector|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:32:26,229] INFO [mysql-connector|task-0] These configurations '[override.acks, override.linger.ms]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig:380)
[2025-05-07 16:32:26,229] INFO [mysql-connector|task-0] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:26,229] INFO [mysql-connector|task-0] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:26,230] INFO [mysql-connector|task-0] Kafka startTimeMs: 1746606746229 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:26,240] INFO [mysql-connector|task-0] [Producer clientId=OMS-schemahistory] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:32:26,274] INFO [mysql-connector|task-0] Using 'SHOW MASTER STATUS' to get binary log status (io.debezium.connector.mysql.jdbc.MySqlConnection:41)
[2025-05-07 16:32:26,289] INFO [mysql-connector|task-0] Closing connection before starting schema recovery (io.debezium.connector.mysql.MySqlConnectorTask:123)
[2025-05-07 16:32:26,290] INFO [mysql-connector|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:983)
[2025-05-07 16:32:26,291] INFO [mysql-connector|task-0] Connector started for the first time. (io.debezium.connector.common.BaseSourceTask:92)
[2025-05-07 16:32:26,292] INFO [mysql-connector|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = OMS-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = OMS-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:371)
[2025-05-07 16:32:26,292] INFO [mysql-connector|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:32:26,295] INFO [mysql-connector|task-0] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:26,295] INFO [mysql-connector|task-0] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:26,295] INFO [mysql-connector|task-0] Kafka startTimeMs: 1746606746295 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:26,299] INFO [mysql-connector|task-0] [Consumer clientId=OMS-schemahistory, groupId=OMS-schemahistory] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365)
[2025-05-07 16:32:26,306] INFO [mysql-connector|task-0] [Consumer clientId=OMS-schemahistory, groupId=OMS-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1056)
[2025-05-07 16:32:26,306] INFO [mysql-connector|task-0] [Consumer clientId=OMS-schemahistory, groupId=OMS-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1103)
[2025-05-07 16:32:26,309] INFO [mysql-connector|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:685)
[2025-05-07 16:32:26,309] INFO [mysql-connector|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:689)
[2025-05-07 16:32:26,309] INFO [mysql-connector|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:689)
[2025-05-07 16:32:26,309] INFO [mysql-connector|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:695)
[2025-05-07 16:32:26,311] INFO [mysql-connector|task-0] App info kafka.consumer for OMS-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:89)
[2025-05-07 16:32:26,312] INFO [mysql-connector|task-0] AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = OMS-schemahistory connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:371)
[2025-05-07 16:32:26,313] INFO [mysql-connector|task-0] These configurations '[enable.idempotence, value.serializer, batch.size, override.acks, max.in.flight.requests.per.connection, buffer.memory, key.serializer, override.linger.ms]' were supplied but are not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:380)
[2025-05-07 16:32:26,313] INFO [mysql-connector|task-0] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:26,313] INFO [mysql-connector|task-0] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:26,313] INFO [mysql-connector|task-0] Kafka startTimeMs: 1746606746313 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:26,384] INFO [mysql-connector|task-0] Database schema history topic '(name=mysql5_schema_history, numPartitions=1, replicationFactor=default, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=9223372036854775807, retention.bytes=-1})' created (io.debezium.storage.kafka.history.KafkaSchemaHistory:558)
[2025-05-07 16:32:26,384] INFO [mysql-connector|task-0] App info kafka.admin.client for OMS-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:89)
[2025-05-07 16:32:26,385] INFO [mysql-connector|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:685)
[2025-05-07 16:32:26,385] INFO [mysql-connector|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:689)
[2025-05-07 16:32:26,385] INFO [mysql-connector|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:695)
[2025-05-07 16:32:26,386] INFO [mysql-connector|task-0] Reconnecting after finishing schema recovery (io.debezium.connector.mysql.MySqlConnectorTask:136)
[2025-05-07 16:32:26,406] INFO [mysql-connector|task-0] No previous offset found (io.debezium.connector.mysql.MySqlConnectorTask:161)
[2025-05-07 16:32:26,422] INFO [mysql-connector|task-0] Requested thread factory for component MySqlConnector, id = OMS named = SignalProcessor (io.debezium.util.Threads:270)
[2025-05-07 16:32:26,435] INFO [mysql-connector|task-0] Requested thread factory for component MySqlConnector, id = OMS named = change-event-source-coordinator (io.debezium.util.Threads:270)
[2025-05-07 16:32:26,435] INFO [mysql-connector|task-0] Requested thread factory for component MySqlConnector, id = OMS named = blocking-snapshot (io.debezium.util.Threads:270)
[2025-05-07 16:32:26,437] INFO [mysql-connector|task-0] Creating thread debezium-mysqlconnector-OMS-change-event-source-coordinator (io.debezium.util.Threads:287)
[2025-05-07 16:32:26,437] INFO [mysql-connector|task-0] WorkerSourceTask{id=mysql-connector-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:280)
[2025-05-07 16:32:26,440] INFO [mysql-connector|task-0] Metrics registered (io.debezium.pipeline.ChangeEventSourceCoordinator:137)
[2025-05-07 16:32:26,441] INFO [mysql-connector|task-0] Context created (io.debezium.pipeline.ChangeEventSourceCoordinator:140)
[2025-05-07 16:32:26,449] INFO [mysql-connector|task-0] According to the connector configuration both schema and data will be snapshot. (io.debezium.relational.RelationalSnapshotChangeEventSource:282)
[2025-05-07 16:32:26,452] INFO [mysql-connector|task-0] Snapshot step 1 - Preparing (io.debezium.relational.RelationalSnapshotChangeEventSource:135)
[2025-05-07 16:32:26,453] INFO [mysql-connector|task-0] Snapshot step 2 - Determining captured tables (io.debezium.relational.RelationalSnapshotChangeEventSource:144)
[2025-05-07 16:32:26,453] INFO [mysql-connector|task-0] Read list of available databases (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:116)
[2025-05-07 16:32:26,455] INFO [mysql-connector|task-0] list of available databases is: [information_schema, dev_kafka, diw_oms_test, mydata, mysql, performance_schema, sys] (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:118)
[2025-05-07 16:32:26,455] INFO [mysql-connector|task-0] Read list of available tables in each database (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:126)
[2025-05-07 16:32:26,791] INFO [mysql-connector|task-0] snapshot continuing with database(s): [dev_kafka] (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:147)
[2025-05-07 16:32:26,799] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sale_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,799] INFO [mysql-connector|task-0] Adding table diw_oms_test.pu_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,799] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_depot_head to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.deposit_collection_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.money_in to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_approval to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_organization to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.payment_settlement_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_order_pay_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material_initial_stock to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_role to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_msg to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sales_order_del to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350)
[2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table 
dev_kafka.button_stock_info to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_margin_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_stock_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.pu_contract_valence_of_vertex to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.sale_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.sale_payment_settlement to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_transfer_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,800] INFO [mysql-connector|task-0] Adding table diw_oms_test.tms_sale_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.address_book to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_user_business to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO 
[mysql-connector|task-0] Adding table dev_kafka.jsh_account_item to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sale_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_order_number to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sales_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.money_in_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_transfer_company_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_reports to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sales_order_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.tms_sale_order_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material_property to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) 
[2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.sys_dict_value to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_batch_basic_information to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_depot to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_msg to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_pu_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_account_head to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_order_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_invoice_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table dev_kafka.pu_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,801] INFO [mysql-connector|task-0] Adding table diw_oms_test.button_contract_change to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) 
[2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sale_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_pu_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_contract_change_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_pu_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.sale_payment_settlement to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.payment_settlement_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.price_history to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table dev_kafka.sales_order_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_in_out_item to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_invoice_info to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table mydata.tt to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.sys_dict to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_depot_head to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.deposit_collection to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material_current_stock to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.sales_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.sale_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_tenant to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_margin_pay_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_margin_pay to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,802] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_account to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material_property to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_transfer_main to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.sales_order_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_contract_change_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material_category to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sales_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.sale_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sales_order_update_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_role to the list 
of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_in_out_item to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_account_head to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.payment_settlement to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.kafka_datasync to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sale_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.kafka_datasync to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.money_in_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material_attribute to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_serial_number to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material to the list of 
capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.sale_payment_settlement_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,803] INFO [mysql-connector|task-0] Adding table dev_kafka.contract_center_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_user_tenant to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material_attribute to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_user_tenant to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table diw_oms_test.sales_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_reports to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_serial_number to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_yarn_batch_info to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table diw_oms_test.money_in_detail to the 
list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_order_pay to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_margin_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_platform_config to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material_extend to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.stock_report to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_region to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.sale_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.job_operation_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table diw_oms_test.pu_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table diw_oms_test.money_in_link to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table mydata.tttt to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.pu_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,804] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_depot to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_organization to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_region to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_order_number to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.sale_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material_current_stock to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_user_business to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.products to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_pu_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.sys_dict to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sales_order_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_margin_pay to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_person to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_account_item to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_depot_item to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.button_stock_info to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material_extend to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_pu_order_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_material_category to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_contract_change to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,805] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_depot_item to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table mydata.debezium_signal to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_approval to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_function to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_batch_basic_information to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.pu_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_person to the list of capture schema 
tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.sale_payment_settlement_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_order_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.address_book to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_sequence to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sale_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.pu_contract_valence_of_vertex to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_transfer_company_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table mydata.ttt to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.contract_center to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.tms_sale_order_log to the list of 
capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_material_initial_stock to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_supplier to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_pu_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sales_order_update_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table mydata.data_type_test to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_tenant to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.diw_invoice_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,806] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_order_pay_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table mydata.t_varchar to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.money_in to the list of capture 
schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_transfer_main to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sales_order_del to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.button_contract_change_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_sales_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_margin_pay_detail to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table mydata.test to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.button_contract_change_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_account to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.pu_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.deposit_collection_details to the 
list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.contract_center to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_unit to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_stock_batch to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.sale_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.payment_settlement to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.deposit_collection to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.contract_center_link to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.pu_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sales_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,807] INFO [mysql-connector|task-0] Adding table dev_kafka.button_contract_change to 
the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_orga_user_rel to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.sales_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table mydata.stu to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_pu_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table mydata.t_type to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_yarn_batch_info to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_user to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.yarn_contract_change to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_orga_user_rel to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.sales_order_details to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.sys_dict_value to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_supplier to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_system_config to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.stock_report to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.diw_invoice_info to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.job_operation_log to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_system_config to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_sale_contract_details to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_order_pay to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_unit to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,808] INFO [mysql-connector|task-0] Adding table dev_kafka.price_history to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_sequence to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_user to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table dev_kafka.pu_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table diw_oms_test.jsh_platform_config to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table dev_kafka.tms_sale_order to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_pu_contract to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table dev_kafka.jsh_function to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,809] INFO [mysql-connector|task-0] Adding table diw_oms_test.yarn_transfer_detail to the list of capture schema tables 
(io.debezium.relational.RelationalSnapshotChangeEventSource:350) [2025-05-07 16:32:26,812] INFO [mysql-connector|task-0] Created connection pool with 1 threads (io.debezium.relational.RelationalSnapshotChangeEventSource:236) [2025-05-07 16:32:26,812] INFO [mysql-connector|task-0] Snapshot step 3 - Locking captured tables [dev_kafka.products] (io.debezium.relational.RelationalSnapshotChangeEventSource:153) [2025-05-07 16:32:26,816] INFO [mysql-connector|task-0] Flush and obtain global read lock to prevent writes to database (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:495) [2025-05-07 16:32:26,836] INFO [mysql-connector|task-0] Snapshot step 4 - Determining snapshot offset (io.debezium.relational.RelationalSnapshotChangeEventSource:159) [2025-05-07 16:32:26,839] INFO [mysql-connector|task-0] Read binlog position of MySQL primary server (io.debezium.connector.mysql.MySqlSnapshotChangeEventSource:58) [2025-05-07 16:32:26,841] INFO [mysql-connector|task-0] using binlog 'mysql-bin.000001' at position '1015' and gtid '' (io.debezium.connector.mysql.MySqlSnapshotChangeEventSource:69) [2025-05-07 16:32:26,841] INFO [mysql-connector|task-0] Snapshot step 5 - Reading structure of captured tables (io.debezium.relational.RelationalSnapshotChangeEventSource:162) [2025-05-07 16:32:26,841] INFO [mysql-connector|task-0] All eligible tables schema should be captured, capturing: [dev_kafka.address_book, dev_kafka.button_contract_change, dev_kafka.button_contract_change_details, dev_kafka.button_stock_info, dev_kafka.contract_center, dev_kafka.contract_center_link, dev_kafka.deposit_collection, dev_kafka.deposit_collection_details, dev_kafka.diw_approval, dev_kafka.diw_batch_basic_information, dev_kafka.diw_invoice_detail, dev_kafka.diw_invoice_info, dev_kafka.diw_reports, dev_kafka.diw_user_tenant, dev_kafka.diw_yarn_batch_info, dev_kafka.job_operation_log, dev_kafka.jsh_account, dev_kafka.jsh_account_head, dev_kafka.jsh_account_item, dev_kafka.jsh_depot, 
dev_kafka.jsh_depot_head, dev_kafka.jsh_depot_item, dev_kafka.jsh_function, dev_kafka.jsh_in_out_item, dev_kafka.jsh_log, dev_kafka.jsh_margin_batch, dev_kafka.jsh_margin_pay, dev_kafka.jsh_margin_pay_detail, dev_kafka.jsh_material, dev_kafka.jsh_material_attribute, dev_kafka.jsh_material_category, dev_kafka.jsh_material_current_stock, dev_kafka.jsh_material_extend, dev_kafka.jsh_material_initial_stock, dev_kafka.jsh_material_property, dev_kafka.jsh_msg, dev_kafka.jsh_order_batch, dev_kafka.jsh_order_number, dev_kafka.jsh_order_pay, dev_kafka.jsh_order_pay_detail, dev_kafka.jsh_orga_user_rel, dev_kafka.jsh_organization, dev_kafka.jsh_person, dev_kafka.jsh_platform_config, dev_kafka.jsh_region, dev_kafka.jsh_role, dev_kafka.jsh_sequence, dev_kafka.jsh_serial_number, dev_kafka.jsh_stock_batch, dev_kafka.jsh_supplier, dev_kafka.jsh_system_config, dev_kafka.jsh_tenant, dev_kafka.jsh_unit, dev_kafka.jsh_user, dev_kafka.jsh_user_business, dev_kafka.kafka_datasync, dev_kafka.money_in, dev_kafka.money_in_detail, dev_kafka.money_in_link, dev_kafka.payment_settlement, dev_kafka.payment_settlement_details, dev_kafka.price_history, dev_kafka.products, dev_kafka.pu_contract, dev_kafka.pu_contract_details, dev_kafka.pu_contract_valence_of_vertex, dev_kafka.pu_order, dev_kafka.pu_order_details, dev_kafka.sale_batch, dev_kafka.sale_contract, dev_kafka.sale_contract_details, dev_kafka.sale_payment_settlement, dev_kafka.sale_payment_settlement_details, dev_kafka.sales_order, dev_kafka.sales_order_details, dev_kafka.sales_order_link, dev_kafka.stock_report, dev_kafka.sys_dict, dev_kafka.sys_dict_value, dev_kafka.tms_sale_order, dev_kafka.tms_sale_order_log, dev_kafka.yarn_contract_change, dev_kafka.yarn_contract_change_details, dev_kafka.yarn_pu_contract, dev_kafka.yarn_pu_contract_details, dev_kafka.yarn_pu_order, dev_kafka.yarn_pu_order_details, dev_kafka.yarn_sale_batch, dev_kafka.yarn_sale_contract, dev_kafka.yarn_sale_contract_details, dev_kafka.yarn_sales_order, 
dev_kafka.yarn_sales_order_del, dev_kafka.yarn_sales_order_details, dev_kafka.yarn_sales_order_link, dev_kafka.yarn_sales_order_update_log, dev_kafka.yarn_transfer_company_detail, dev_kafka.yarn_transfer_detail, dev_kafka.yarn_transfer_main, diw_oms_test.address_book, diw_oms_test.button_contract_change, diw_oms_test.button_contract_change_details, diw_oms_test.button_stock_info, diw_oms_test.contract_center, diw_oms_test.contract_center_link, diw_oms_test.deposit_collection, diw_oms_test.deposit_collection_details, diw_oms_test.diw_approval, diw_oms_test.diw_batch_basic_information, diw_oms_test.diw_invoice_detail, diw_oms_test.diw_invoice_info, diw_oms_test.diw_reports, diw_oms_test.diw_user_tenant, diw_oms_test.diw_yarn_batch_info, diw_oms_test.job_operation_log, diw_oms_test.jsh_account, diw_oms_test.jsh_account_head, diw_oms_test.jsh_account_item, diw_oms_test.jsh_depot, diw_oms_test.jsh_depot_head, diw_oms_test.jsh_depot_item, diw_oms_test.jsh_function, diw_oms_test.jsh_in_out_item, diw_oms_test.jsh_log, diw_oms_test.jsh_margin_batch, diw_oms_test.jsh_margin_pay, diw_oms_test.jsh_margin_pay_detail, diw_oms_test.jsh_material, diw_oms_test.jsh_material_attribute, diw_oms_test.jsh_material_category, diw_oms_test.jsh_material_current_stock, diw_oms_test.jsh_material_extend, diw_oms_test.jsh_material_initial_stock, diw_oms_test.jsh_material_property, diw_oms_test.jsh_msg, diw_oms_test.jsh_order_batch, diw_oms_test.jsh_order_number, diw_oms_test.jsh_order_pay, diw_oms_test.jsh_order_pay_detail, diw_oms_test.jsh_orga_user_rel, diw_oms_test.jsh_organization, diw_oms_test.jsh_person, diw_oms_test.jsh_platform_config, diw_oms_test.jsh_region, diw_oms_test.jsh_role, diw_oms_test.jsh_sequence, diw_oms_test.jsh_serial_number, diw_oms_test.jsh_stock_batch, diw_oms_test.jsh_supplier, diw_oms_test.jsh_system_config, diw_oms_test.jsh_tenant, diw_oms_test.jsh_unit, diw_oms_test.jsh_user, diw_oms_test.jsh_user_business, diw_oms_test.kafka_datasync, diw_oms_test.money_in, 
diw_oms_test.money_in_detail, diw_oms_test.money_in_link, diw_oms_test.payment_settlement, diw_oms_test.payment_settlement_details, diw_oms_test.price_history, diw_oms_test.pu_contract, diw_oms_test.pu_contract_details, diw_oms_test.pu_contract_valence_of_vertex, diw_oms_test.pu_order, diw_oms_test.pu_order_details, diw_oms_test.sale_batch, diw_oms_test.sale_contract, diw_oms_test.sale_contract_details, diw_oms_test.sale_payment_settlement, diw_oms_test.sale_payment_settlement_details, diw_oms_test.sales_order, diw_oms_test.sales_order_details, diw_oms_test.sales_order_link, diw_oms_test.stock_report, diw_oms_test.sys_dict, diw_oms_test.sys_dict_value, diw_oms_test.tms_sale_order, diw_oms_test.tms_sale_order_log, diw_oms_test.yarn_contract_change, diw_oms_test.yarn_contract_change_details, diw_oms_test.yarn_pu_contract, diw_oms_test.yarn_pu_contract_details, diw_oms_test.yarn_pu_order, diw_oms_test.yarn_pu_order_details, diw_oms_test.yarn_sale_batch, diw_oms_test.yarn_sale_contract, diw_oms_test.yarn_sale_contract_details, diw_oms_test.yarn_sales_order, diw_oms_test.yarn_sales_order_del, diw_oms_test.yarn_sales_order_details, diw_oms_test.yarn_sales_order_link, diw_oms_test.yarn_sales_order_update_log, diw_oms_test.yarn_transfer_company_detail, diw_oms_test.yarn_transfer_detail, diw_oms_test.yarn_transfer_main, mydata.data_type_test, mydata.debezium_signal, mydata.stu, mydata.t_type, mydata.t_varchar, mydata.test, mydata.tt, mydata.ttt, mydata.tttt] (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:314) [2025-05-07 16:32:27,423] INFO [mysql-connector|task-0] Reading structure of database 'dev_kafka' (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:348) [2025-05-07 16:32:28,848] INFO [mysql-connector|task-0] Reading structure of database 'diw_oms_test' (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:348) [2025-05-07 16:32:29,372] INFO [mysql-connector|task-0] Reading structure of database 'mydata' 
(io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:348) [2025-05-07 16:32:29,431] INFO [mysql-connector|task-0] Snapshot step 6 - Persisting schema history (io.debezium.relational.RelationalSnapshotChangeEventSource:166) [2025-05-07 16:32:29,449] INFO [mysql-connector|task-0] Already applied 1 database changes (io.debezium.relational.history.SchemaHistoryMetrics:140) [2025-05-07 16:32:29,952] INFO [mysql-connector|task-0] The task will send records to topic 'OMS' for the first time. Checking whether topic exists (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:511) [2025-05-07 16:32:29,972] INFO [mysql-connector|task-0] Creating topic 'OMS' (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:520) [2025-05-07 16:32:30,027] INFO [mysql-connector|task-0] Created topic (name=OMS, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=604800000}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:416) [2025-05-07 16:32:30,027] INFO [mysql-connector|task-0] Created topic '(name=OMS, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=604800000})' using creation group TopicCreationGroup{name='default', inclusionPattern=.*, exclusionPattern=, numPartitions=1, replicationFactor=1, otherConfigs={cleanup.policy=delete, retention.ms=604800000}} (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:528) [2025-05-07 16:32:30,225] INFO [mysql-connector|task-0] Already applied 261 database changes (io.debezium.relational.history.SchemaHistoryMetrics:140) [2025-05-07 16:32:30,740] INFO [mysql-connector|task-0] Releasing global read lock to enable MySQL writes (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:504) [2025-05-07 16:32:30,765] INFO [mysql-connector|task-0] Writes to MySQL tables prevented for a total of 00:00:03.929 (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:508) [2025-05-07 
16:32:30,765] INFO [mysql-connector|task-0] Snapshot step 7 - Snapshotting data (io.debezium.relational.RelationalSnapshotChangeEventSource:178) [2025-05-07 16:32:30,766] INFO [mysql-connector|task-0] Creating snapshot worker pool with 1 worker thread(s) (io.debezium.relational.RelationalSnapshotChangeEventSource:480) [2025-05-07 16:32:30,768] INFO [mysql-connector|task-0] For table 'dev_kafka.products' using select statement: 'SELECT `id`, `number`, `date` FROM `dev_kafka`.`products`' (io.debezium.relational.RelationalSnapshotChangeEventSource:489) [2025-05-07 16:32:30,785] INFO [mysql-connector|task-0] Estimated row count for table dev_kafka.products is OptionalLong[1] (io.debezium.connector.binlog.BinlogSnapshotChangeEventSource:561) [2025-05-07 16:32:30,789] INFO [mysql-connector|task-0] Exporting data from table 'dev_kafka.products' (1 of 1 tables) (io.debezium.relational.RelationalSnapshotChangeEventSource:614) [2025-05-07 16:32:30,882] INFO [mysql-connector|task-0] Finished exporting 2 records for table 'dev_kafka.products' (1 of 1 tables); total duration '00:00:00.093' (io.debezium.relational.RelationalSnapshotChangeEventSource:660) [2025-05-07 16:32:30,886] INFO [mysql-connector|task-0] Snapshot - Final stage (io.debezium.pipeline.source.AbstractSnapshotChangeEventSource:108) [2025-05-07 16:32:30,886] INFO [mysql-connector|task-0] Snapshot completed (io.debezium.pipeline.source.AbstractSnapshotChangeEventSource:112) [2025-05-07 16:32:30,933] INFO [mysql-connector|task-0] Snapshot ended with SnapshotResult [status=COMPLETED, offset=BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='mysql-bin.000001', currentBinlogPosition=1015, currentRowNumber=0, serverId=0, sourceTime=2025-05-07T21:32:30Z, threadId=-1, currentQuery='null', tableIds=[dev_kafka.products], databaseName='mydata'}, snapshotCompleted=true, transactionContext=TransactionContext 
[currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='null', currentGtidSet='null', restartBinlogFilename='mysql-bin.000001', restartBinlogPosition=1015, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]}] (io.debezium.pipeline.ChangeEventSourceCoordinator:298) [2025-05-07 16:32:30,944] INFO [mysql-connector|task-0] Requested thread factory for component MySqlConnector, id = OMS named = binlog-client (io.debezium.util.Threads:270) [2025-05-07 16:32:30,947] INFO [mysql-connector|task-0] Enable ssl PREFERRED mode for connector OMS (io.debezium.connector.binlog.BinlogStreamingChangeEventSource:1275) [2025-05-07 16:32:30,959] INFO [mysql-connector|task-0] No incremental snapshot in progress, no action needed on start (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:228) [2025-05-07 16:32:30,968] INFO [mysql-connector|task-0] SignalProcessor started. 
Scheduling it every 5000ms (io.debezium.pipeline.signal.SignalProcessor:105) [2025-05-07 16:32:30,969] INFO [mysql-connector|task-0] Creating thread debezium-mysqlconnector-OMS-SignalProcessor (io.debezium.util.Threads:287) [2025-05-07 16:32:30,980] INFO [mysql-connector|task-0] Starting streaming (io.debezium.pipeline.ChangeEventSourceCoordinator:323) [2025-05-07 16:32:30,986] INFO [mysql-connector|task-0] Skip 0 events on streaming start (io.debezium.connector.binlog.BinlogStreamingChangeEventSource:278) [2025-05-07 16:32:30,986] INFO [mysql-connector|task-0] Skip 0 rows on streaming start (io.debezium.connector.binlog.BinlogStreamingChangeEventSource:282) [2025-05-07 16:32:30,987] INFO [mysql-connector|task-0] Creating thread debezium-mysqlconnector-OMS-binlog-client (io.debezium.util.Threads:287) [2025-05-07 16:32:30,990] INFO [mysql-connector|task-0] Creating thread debezium-mysqlconnector-OMS-binlog-client (io.debezium.util.Threads:287) [2025-05-07 16:32:31,098] INFO [mysql-connector|task-0] Connected to binlog at localhost:3306, starting at BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='mysql-bin.000001', currentBinlogPosition=1015, currentRowNumber=0, serverId=0, sourceTime=2025-05-07T21:32:30Z, threadId=-1, currentQuery='null', tableIds=[dev_kafka.products], databaseName='mydata'}, snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='null', currentGtidSet='null', restartBinlogFilename='mysql-bin.000001', restartBinlogPosition=1015, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]} 
(io.debezium.connector.binlog.BinlogStreamingChangeEventSource:1232) [2025-05-07 16:32:31,099] INFO [mysql-connector|task-0] Waiting for keepalive thread to start (io.debezium.connector.binlog.BinlogStreamingChangeEventSource:299) [2025-05-07 16:32:31,101] INFO [mysql-connector|task-0] Creating thread debezium-mysqlconnector-OMS-binlog-client (io.debezium.util.Threads:287) [2025-05-07 16:32:31,200] INFO [mysql-connector|task-0] Keepalive thread is running (io.debezium.connector.binlog.BinlogStreamingChangeEventSource:306) [2025-05-07 16:32:31,338] INFO [mysql-connector|task-0] 420 records sent during previous 00:00:05.234, last recorded offset of {server=OMS} partition is {ts_sec=1746653550, file=mysql-bin.000001, pos=1015} (io.debezium.connector.common.BaseSourceTask:354) [2025-05-07 16:32:31,340] INFO [mysql-connector|task-0] The task will send records to topic 'OMS.dev_kafka.products' for the first time. Checking whether topic exists (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:511) [2025-05-07 16:32:31,342] INFO [mysql-connector|task-0] Creating topic 'OMS.dev_kafka.products' (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:520) [2025-05-07 16:32:31,373] INFO [mysql-connector|task-0] Created topic (name=OMS.dev_kafka.products, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:416) [2025-05-07 16:32:31,374] INFO [mysql-connector|task-0] Created topic '(name=OMS.dev_kafka.products, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={})' using creation group TopicCreationGroup{name='dev_kafka_group', inclusionPattern=OMS.dev_kafka.*, exclusionPattern=, numPartitions=1, replicationFactor=1, otherConfigs={}} (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:528) [2025-05-07 16:32:36,132] INFO [mysql-connector|task-0|offsets] WorkerSourceTask{id=mysql-connector-0} Committing offsets for 420 acknowledged messages 
(org.apache.kafka.connect.runtime.WorkerSourceTask:236) [2025-05-07 16:32:47,340] INFO AbstractConfig values: (org.apache.kafka.common.config.AbstractConfig:371) [2025-05-07 16:32:47,346] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Connector oracle-sink config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2448) [2025-05-07 16:32:47,349] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:243) [2025-05-07 16:32:47,349] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:605) [2025-05-07 16:32:47,350] INFO [0:0:0:0:0:0:0:1] - - [07/May/2025:08:32:47 +0000] "POST /connectors HTTP/1.1" 201 1933 "-" "curl/7.29.0" 28 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-05-07 16:32:47,351] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=4, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:666) [2025-05-07 16:32:47,355] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=4, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:843) [2025-05-07 16:32:47,355] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Joined group at generation 4 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', leaderUrl='http://99.12.11.33:8083/', offset=5, connectorIds=[oracle-sink, mysql-connector], taskIds=[mysql-connector-0], 
revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2648) [2025-05-07 16:32:47,356] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connectors and tasks using config offset 5 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1979) [2025-05-07 16:32:47,356] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connector oracle-sink (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2097) [2025-05-07 16:32:47,356] INFO [oracle-sink|worker] Creating connector oracle-sink of type io.debezium.connector.jdbc.JdbcSinkConnector (org.apache.kafka.connect.runtime.Worker:313) [2025-05-07 16:32:47,357] INFO [oracle-sink|worker] SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] value.converter = null (org.apache.kafka.connect.runtime.SinkConnectorConfig:371) [2025-05-07 16:32:47,357] INFO [oracle-sink|worker] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none 
header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] transforms.schema-uppercase.collection.naming.prefix = transforms.schema-uppercase.collection.naming.style = upper_case transforms.schema-uppercase.collection.naming.suffix = transforms.schema-uppercase.negate = false transforms.schema-uppercase.predicate = null transforms.schema-uppercase.type = class io.debezium.connector.jdbc.transforms.CollectionNameTransformation transforms.tablename-uppercase.column.naming.prefix = transforms.tablename-uppercase.column.naming.style = upper_case transforms.tablename-uppercase.column.naming.suffix = transforms.tablename-uppercase.negate = false transforms.tablename-uppercase.predicate = null transforms.tablename-uppercase.type = class io.debezium.connector.jdbc.transforms.FieldNameTransformation value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371) [2025-05-07 16:32:47,358] INFO [oracle-sink|worker] Instantiated connector oracle-sink with version 3.2.0.Alpha1 of type class io.debezium.connector.jdbc.JdbcSinkConnector (org.apache.kafka.connect.runtime.Worker:335) [2025-05-07 16:32:47,358] INFO [oracle-sink|worker] Finished creating connector oracle-sink (org.apache.kafka.connect.runtime.Worker:356) [2025-05-07 16:32:47,358] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2008) [2025-05-07 16:32:47,360] INFO SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true 
errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] value.converter = null (org.apache.kafka.connect.runtime.SinkConnectorConfig:371)
[2025-05-07 16:32:47,361] INFO EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] transforms.schema-uppercase.collection.naming.prefix = transforms.schema-uppercase.collection.naming.style = upper_case transforms.schema-uppercase.collection.naming.suffix = transforms.schema-uppercase.negate = false transforms.schema-uppercase.predicate = null transforms.schema-uppercase.type = class io.debezium.connector.jdbc.transforms.CollectionNameTransformation transforms.tablename-uppercase.column.naming.prefix = transforms.tablename-uppercase.column.naming.style = upper_case transforms.tablename-uppercase.column.naming.suffix = transforms.tablename-uppercase.negate = false transforms.tablename-uppercase.predicate = null transforms.tablename-uppercase.type = class io.debezium.connector.jdbc.transforms.FieldNameTransformation value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:47,376] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Tasks [oracle-sink-1, oracle-sink-0] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2467)
[2025-05-07 16:32:47,377] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:243)
[2025-05-07 16:32:47,377] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:605)
[2025-05-07 16:32:47,380] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=5, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:666)
[2025-05-07 16:32:47,384] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=5, memberId='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:843)
[2025-05-07 16:32:47,384] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Joined group at generation 5 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-99.12.11.33:8083-ce169dfa-416b-4d0d-abd5-8b92fe0e128c', leaderUrl='http://99.12.11.33:8083/', offset=8, connectorIds=[oracle-sink, mysql-connector], taskIds=[oracle-sink-0, oracle-sink-1, mysql-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2648)
[2025-05-07 16:32:47,384] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting connectors and tasks using config offset 8 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1979)
[2025-05-07 16:32:47,385] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting task oracle-sink-1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2022)
[2025-05-07 16:32:47,387] INFO [oracle-sink|task-1] Creating task oracle-sink-1 (org.apache.kafka.connect.runtime.Worker:646)
[2025-05-07 16:32:47,387] INFO [oracle-sink|task-1] ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true transforms = [schema-uppercase, tablename-uppercase] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig:371)
[2025-05-07 16:32:47,388] INFO [oracle-sink|task-1] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true transforms = [schema-uppercase, tablename-uppercase] transforms.schema-uppercase.collection.naming.prefix = transforms.schema-uppercase.collection.naming.style = upper_case transforms.schema-uppercase.collection.naming.suffix = transforms.schema-uppercase.negate = false transforms.schema-uppercase.predicate = null transforms.schema-uppercase.type = class io.debezium.connector.jdbc.transforms.CollectionNameTransformation transforms.tablename-uppercase.column.naming.prefix = transforms.tablename-uppercase.column.naming.style = upper_case transforms.tablename-uppercase.column.naming.suffix = transforms.tablename-uppercase.negate = false transforms.tablename-uppercase.predicate = null transforms.tablename-uppercase.type = class io.debezium.connector.jdbc.transforms.FieldNameTransformation value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:47,389] INFO [oracle-sink|task-1] TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask (org.apache.kafka.connect.runtime.TaskConfig:371)
[2025-05-07 16:32:47,390] INFO [oracle-sink|task-1] New InternalSinkRecord class found (io.debezium.connector.jdbc.JdbcSinkConnectorTask:75)
[2025-05-07 16:32:47,390] INFO [oracle-sink|task-1] Instantiated task oracle-sink-1 with version 3.2.0.Alpha1 of type io.debezium.connector.jdbc.JdbcSinkConnectorTask (org.apache.kafka.connect.runtime.Worker:665)
[2025-05-07 16:32:47,390] INFO [oracle-sink|task-1] JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true (org.apache.kafka.connect.json.JsonConverterConfig:371)
[2025-05-07 16:32:47,391] INFO [oracle-sink|task-1] Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task oracle-sink-1 using the worker config (org.apache.kafka.connect.runtime.Worker:678)
[2025-05-07 16:32:47,391] INFO [oracle-sink|task-1] JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true (org.apache.kafka.connect.json.JsonConverterConfig:371)
[2025-05-07 16:32:47,391] INFO [oracle-sink|task-1] Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task oracle-sink-1 using the worker config (org.apache.kafka.connect.runtime.Worker:684)
[2025-05-07 16:32:47,391] INFO [oracle-sink|task-1] Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task oracle-sink-1 using the worker config (org.apache.kafka.connect.runtime.Worker:691)
[2025-05-07 16:32:47,393] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Starting task oracle-sink-0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2022)
[2025-05-07 16:32:47,393] INFO [oracle-sink|task-0] Creating task oracle-sink-0 (org.apache.kafka.connect.runtime.Worker:646)
[2025-05-07 16:32:47,394] INFO [oracle-sink|task-0] ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true transforms = [schema-uppercase, tablename-uppercase] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig:371)
[2025-05-07 16:32:47,394] INFO [oracle-sink|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true transforms = [schema-uppercase, tablename-uppercase] transforms.schema-uppercase.collection.naming.prefix = transforms.schema-uppercase.collection.naming.style = upper_case transforms.schema-uppercase.collection.naming.suffix = transforms.schema-uppercase.negate = false transforms.schema-uppercase.predicate = null transforms.schema-uppercase.type = class io.debezium.connector.jdbc.transforms.CollectionNameTransformation transforms.tablename-uppercase.column.naming.prefix = transforms.tablename-uppercase.column.naming.style = upper_case transforms.tablename-uppercase.column.naming.suffix = transforms.tablename-uppercase.negate = false transforms.tablename-uppercase.predicate = null transforms.tablename-uppercase.type = class io.debezium.connector.jdbc.transforms.FieldNameTransformation value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:47,395] INFO [oracle-sink|task-0] TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask (org.apache.kafka.connect.runtime.TaskConfig:371)
[2025-05-07 16:32:47,395] INFO [oracle-sink|task-0] New InternalSinkRecord class found (io.debezium.connector.jdbc.JdbcSinkConnectorTask:75)
[2025-05-07 16:32:47,395] INFO [oracle-sink|task-0] Instantiated task oracle-sink-0 with version 3.2.0.Alpha1 of type io.debezium.connector.jdbc.JdbcSinkConnectorTask (org.apache.kafka.connect.runtime.Worker:665)
[2025-05-07 16:32:47,395] INFO [oracle-sink|task-0] JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true (org.apache.kafka.connect.json.JsonConverterConfig:371)
[2025-05-07 16:32:47,395] INFO [oracle-sink|task-0] Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task oracle-sink-0 using the worker config (org.apache.kafka.connect.runtime.Worker:678)
[2025-05-07 16:32:47,395] INFO [oracle-sink|task-0] JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true (org.apache.kafka.connect.json.JsonConverterConfig:371)
[2025-05-07 16:32:47,396] INFO [oracle-sink|task-0] Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task oracle-sink-0 using the worker config (org.apache.kafka.connect.runtime.Worker:684)
[2025-05-07 16:32:47,396] INFO [oracle-sink|task-0] Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task oracle-sink-0 using the worker config (org.apache.kafka.connect.runtime.Worker:691)
[2025-05-07 16:32:47,397] INFO [oracle-sink|task-1] Configured with prefix='', suffix='', naming style='UPPER_CASE' (io.debezium.connector.jdbc.transforms.CollectionNameTransformation:81)
[2025-05-07 16:32:47,399] INFO [oracle-sink|task-1] Configured with prefix='', suffix='', naming style='UPPER_CASE' (io.debezium.connector.jdbc.transforms.FieldNameTransformation:97)
[2025-05-07 16:32:47,399] INFO [oracle-sink|task-1] Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.connector.jdbc.transforms.CollectionNameTransformation, io.debezium.connector.jdbc.transforms.FieldNameTransformation} (org.apache.kafka.connect.runtime.Worker:1795)
[2025-05-07 16:32:47,399] INFO [oracle-sink|task-1] SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] value.converter = null (org.apache.kafka.connect.runtime.SinkConnectorConfig:371)
[2025-05-07 16:32:47,400] INFO [oracle-sink|task-1] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] transforms.schema-uppercase.collection.naming.prefix = transforms.schema-uppercase.collection.naming.style = upper_case transforms.schema-uppercase.collection.naming.suffix = transforms.schema-uppercase.negate = false transforms.schema-uppercase.predicate = null transforms.schema-uppercase.type = class io.debezium.connector.jdbc.transforms.CollectionNameTransformation transforms.tablename-uppercase.column.naming.prefix = transforms.tablename-uppercase.column.naming.style = upper_case transforms.tablename-uppercase.column.naming.suffix = transforms.tablename-uppercase.negate = false transforms.tablename-uppercase.predicate = null transforms.tablename-uppercase.type = class io.debezium.connector.jdbc.transforms.FieldNameTransformation value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:47,401] INFO [oracle-sink|task-1] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-oracle-sink-1 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-oracle-sink group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 100 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:371)
[2025-05-07 16:32:47,402] INFO [oracle-sink|task-1] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:32:47,403] INFO [oracle-sink|task-0] Configured with prefix='', suffix='', naming style='UPPER_CASE' (io.debezium.connector.jdbc.transforms.CollectionNameTransformation:81)
[2025-05-07 16:32:47,403] INFO [oracle-sink|task-0] Configured with prefix='', suffix='', naming style='UPPER_CASE' (io.debezium.connector.jdbc.transforms.FieldNameTransformation:97)
[2025-05-07 16:32:47,403] INFO [oracle-sink|task-0] Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.connector.jdbc.transforms.CollectionNameTransformation, io.debezium.connector.jdbc.transforms.FieldNameTransformation} (org.apache.kafka.connect.runtime.Worker:1795)
[2025-05-07 16:32:47,404] INFO [oracle-sink|task-0] SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] value.converter = null (org.apache.kafka.connect.runtime.SinkConnectorConfig:371)
[2025-05-07 16:32:47,404] INFO [oracle-sink|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 30000 errors.retry.timeout = 86400000 errors.tolerance = none header.converter = null key.converter = null name = oracle-sink predicates = [] tasks.max = 2 tasks.max.enforce = true topics = [] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* transforms = [schema-uppercase, tablename-uppercase] transforms.schema-uppercase.collection.naming.prefix = transforms.schema-uppercase.collection.naming.style = upper_case transforms.schema-uppercase.collection.naming.suffix = transforms.schema-uppercase.negate = false transforms.schema-uppercase.predicate = null transforms.schema-uppercase.type = class io.debezium.connector.jdbc.transforms.CollectionNameTransformation transforms.tablename-uppercase.column.naming.prefix = transforms.tablename-uppercase.column.naming.style = upper_case transforms.tablename-uppercase.column.naming.suffix = transforms.tablename-uppercase.negate = false transforms.tablename-uppercase.predicate = null transforms.tablename-uppercase.type = class io.debezium.connector.jdbc.transforms.FieldNameTransformation value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:371)
[2025-05-07 16:32:47,405] INFO [oracle-sink|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-oracle-sink-0 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-oracle-sink group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 100 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:371)
[2025-05-07 16:32:47,405] INFO [oracle-sink|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:270)
[2025-05-07 16:32:47,406] INFO [oracle-sink|task-1] These configurations '[metrics.context.connect.group.id, metrics.context.connect.kafka.cluster.id]' were supplied but are not used yet. (org.apache.kafka.clients.consumer.ConsumerConfig:380)
[2025-05-07 16:32:47,406] INFO [oracle-sink|task-1] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:47,406] INFO [oracle-sink|task-1] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:47,406] INFO [oracle-sink|task-1] Kafka startTimeMs: 1746606767406 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:47,408] INFO [oracle-sink|task-0] These configurations '[metrics.context.connect.group.id, metrics.context.connect.kafka.cluster.id]' were supplied but are not used yet. (org.apache.kafka.clients.consumer.ConsumerConfig:380)
[2025-05-07 16:32:47,408] INFO [oracle-sink|task-0] Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-05-07 16:32:47,408] INFO [oracle-sink|task-0] Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-05-07 16:32:47,408] INFO [oracle-sink|task-0] Kafka startTimeMs: 1746606767408 (org.apache.kafka.common.utils.AppInfoParser:127)
[2025-05-07 16:32:47,420] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Subscribed to pattern: 'OMS.dev_kafka.*|BIP.YONBIPV3.*' (org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer:534)
[2025-05-07 16:32:47,426] INFO [oracle-sink|task-1] Starting JdbcSinkConnectorConfig with configuration: (io.debezium.connector.jdbc.JdbcSinkConnectorTask:436)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] connector.class = io.debezium.connector.jdbc.JdbcSinkConnector (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] errors.log.include.messages = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] hibernate.c3p0.idle_test_period = 300 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] transforms = schema-uppercase,tablename-uppercase (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] max.retries = 3 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] quote.identifiers = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] varchar.mapping = varchar2 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] log.mining.query.filter = DEBUG (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] errors.log.enable = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] insert.mode = upsert (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] errors.retry.timeout = 86400000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] primary.key.mode = record_key (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] auto.commit.interval.ms = 1000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] auto.commit = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] database.server.time_zone = Asia/Shanghai (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,427] INFO [oracle-sink|task-1] connection.pool.max_size = 5 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] delete.enabled = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] transforms.tablename-uppercase.type = io.debezium.connector.jdbc.transforms.FieldNameTransformation (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] name = oracle-sink (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] errors.tolerance = none (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] numeric.mapping = best_fit (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] auto.create = false (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] connection.password = ******** (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] tasks.max = 2 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] transforms.schema-uppercase.type = io.debezium.connector.jdbc.transforms.CollectionNameTransformation (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] database.tablename.case.insensitive = false (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] retry.backoff.ms = 1000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,428] INFO [oracle-sink|task-1] consumer.override.fetch.max.wait.ms = 500 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] collection.name.format = ${source.name}.${source.table} (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] decimal.handling.mode = double (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] parallel.transaction.threshold = 100 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] schema.evolution = basic (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] auto.evolve = false (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] consumer.override.max.poll.records = 100 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] transforms.schema-uppercase.collection.naming.style = upper_case (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] log.connector = DEBUG (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] log.mining.transaction.retention.hours = 1 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] batch.size = 500 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] consumer.override.max.poll.interval.ms = 300000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] time.precision.mode = connect (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] connection.username = kafka (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] errors.retry.delay.max.ms = 30000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] consumer.override.fetch.min.bytes = 1 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] offset.flush.interval.ms = 10000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] transforms.tablename-uppercase.column.naming.style = upper_case (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] use.time_zone = UTC (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] connection.url = jdbc:oracle:thin:@99.12.11.32:1521/youbip07 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,429] INFO [oracle-sink|task-1] max.parallel.transactions = 8 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,444] INFO [Worker clientId=connect-99.12.11.33:8083, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2008)
[2025-05-07 16:32:47,452] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Subscribed to pattern: 'OMS.dev_kafka.*|BIP.YONBIPV3.*' (org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer:534)
[2025-05-07 16:32:47,452] INFO [oracle-sink|task-0] Starting JdbcSinkConnectorConfig with configuration: (io.debezium.connector.jdbc.JdbcSinkConnectorTask:436)
[2025-05-07 16:32:47,452] INFO [oracle-sink|task-0] connector.class = io.debezium.connector.jdbc.JdbcSinkConnector (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] errors.log.include.messages = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] hibernate.c3p0.idle_test_period = 300 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] transforms = schema-uppercase,tablename-uppercase (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] max.retries = 3 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] quote.identifiers = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] varchar.mapping = varchar2 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] log.mining.query.filter = DEBUG (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] errors.log.enable = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] insert.mode = upsert (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] errors.retry.timeout = 86400000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] primary.key.mode = record_key (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] auto.commit.interval.ms = 1000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] topics.regex = OMS.dev_kafka.*|BIP.YONBIPV3.* (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] auto.commit = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] database.server.time_zone = Asia/Shanghai (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] connection.pool.max_size = 5 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] delete.enabled = true (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] transforms.tablename-uppercase.type = io.debezium.connector.jdbc.transforms.FieldNameTransformation (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] name = oracle-sink (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] errors.tolerance = none (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] numeric.mapping = best_fit (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] auto.create = false (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] connection.password = ******** (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] tasks.max = 2 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] transforms.schema-uppercase.type = io.debezium.connector.jdbc.transforms.CollectionNameTransformation (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] database.tablename.case.insensitive = false (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] retry.backoff.ms = 1000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,453] INFO [oracle-sink|task-0] consumer.override.fetch.max.wait.ms = 500 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] collection.name.format = ${source.name}.${source.table} (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] decimal.handling.mode = double (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] parallel.transaction.threshold = 100 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438)
[2025-05-07
16:32:47,454] INFO [oracle-sink|task-0] schema.evolution = basic (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] auto.evolve = false (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] consumer.override.max.poll.records = 100 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] transforms.schema-uppercase.collection.naming.style = upper_case (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] log.connector = DEBUG (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] log.mining.transaction.retention.hours = 1 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] batch.size = 500 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] consumer.override.max.poll.interval.ms = 300000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] time.precision.mode = connect (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] connection.username = kafka (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] errors.retry.delay.max.ms = 30000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] consumer.override.fetch.min.bytes = 1 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] offset.flush.interval.ms = 10000 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) 
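For readability, the property dump above (printed once per task by `JdbcSinkConnectorTask`) reassembles into roughly the following connector registration. This is a sketch reconstructed only from the keys and values that appear in the log, not the exact JSON that was submitted to the REST API; secrets stay masked as in the log, and some logged defaults are omitted:

```json
{
  "name": "oracle-sink",
  "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
  "tasks.max": "2",
  "topics.regex": "OMS.dev_kafka.*|BIP.YONBIPV3.*",
  "connection.url": "jdbc:oracle:thin:@99.12.11.32:1521/youbip07",
  "connection.username": "kafka",
  "connection.password": "********",
  "insert.mode": "upsert",
  "delete.enabled": "true",
  "primary.key.mode": "record_key",
  "schema.evolution": "basic",
  "quote.identifiers": "true",
  "collection.name.format": "${source.name}.${source.table}",
  "batch.size": "500",
  "errors.tolerance": "none",
  "errors.log.enable": "true",
  "errors.log.include.messages": "true",
  "errors.retry.timeout": "86400000",
  "errors.retry.delay.max.ms": "30000",
  "transforms": "schema-uppercase,tablename-uppercase",
  "transforms.schema-uppercase.type": "io.debezium.connector.jdbc.transforms.CollectionNameTransformation",
  "transforms.schema-uppercase.collection.naming.style": "upper_case",
  "transforms.tablename-uppercase.type": "io.debezium.connector.jdbc.transforms.FieldNameTransformation",
  "transforms.tablename-uppercase.column.naming.style": "upper_case",
  "consumer.override.max.poll.records": "100",
  "consumer.override.max.poll.interval.ms": "300000"
}
```

Note in particular `errors.tolerance = none` together with the two transforms: this combination is what makes the transformation failure later in the log fatal to the task.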
[2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] transforms.tablename-uppercase.column.naming.style = upper_case (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] use.time_zone = UTC (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] connection.url = jdbc:oracle:thin:@99.12.11.32:1521/youbip07 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,454] INFO [oracle-sink|task-0] max.parallel.transactions = 8 (io.debezium.connector.jdbc.JdbcSinkConnectorTask:438) [2025-05-07 16:32:47,494] INFO [oracle-sink|task-1] HHH000412: Hibernate ORM core version 6.4.8.Final (org.hibernate.Version:44) [2025-05-07 16:32:47,544] INFO [oracle-sink|task-0] HHH000026: Second-level cache disabled (org.hibernate.cache.internal.RegionFactoryInitiator:50) [2025-05-07 16:32:47,555] INFO [oracle-sink|task-1] HHH000026: Second-level cache disabled (org.hibernate.cache.internal.RegionFactoryInitiator:50) [2025-05-07 16:32:47,643] INFO [oracle-sink|task-1] HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider (org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator:131) [2025-05-07 16:32:47,645] INFO [oracle-sink|task-1] HHH010002: C3P0 using driver: null at URL: jdbc:oracle:thin:@99.12.11.32:1521/youbip07 (org.hibernate.orm.connections.pooling.c3p0:124) [2025-05-07 16:32:47,646] INFO [oracle-sink|task-1] HHH10001001: Connection properties: {password=****, user=kafka} (org.hibernate.orm.connections.pooling.c3p0:125) [2025-05-07 16:32:47,646] INFO [oracle-sink|task-1] HHH10001003: Autocommit mode: false (org.hibernate.orm.connections.pooling.c3p0:128) [2025-05-07 16:32:47,646] WARN [oracle-sink|task-1] HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` 
(org.hibernate.orm.connections.pooling.c3p0:131) [2025-05-07 16:32:47,653] INFO [oracle-sink|task-0] HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider (org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator:131) [2025-05-07 16:32:47,656] INFO [oracle-sink|task-1] MLog clients using slf4j logging. (com.mchange.v2.log.MLog:212) [2025-05-07 16:32:47,663] INFO [oracle-sink|task-0] HHH010002: C3P0 using driver: null at URL: jdbc:oracle:thin:@99.12.11.32:1521/youbip07 (org.hibernate.orm.connections.pooling.c3p0:124) [2025-05-07 16:32:47,663] INFO [oracle-sink|task-0] HHH10001001: Connection properties: {password=****, user=kafka} (org.hibernate.orm.connections.pooling.c3p0:125) [2025-05-07 16:32:47,664] INFO [oracle-sink|task-0] HHH10001003: Autocommit mode: false (org.hibernate.orm.connections.pooling.c3p0:128) [2025-05-07 16:32:47,664] WARN [oracle-sink|task-0] HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` (org.hibernate.orm.connections.pooling.c3p0:131) [2025-05-07 16:32:47,702] INFO [oracle-sink|task-1] Initializing c3p0-0.9.5.5 [built 11-December-2019 22:18:33 -0800; debug? true; trace: 10] (com.mchange.v2.c3p0.C3P0Registry:212) [2025-05-07 16:32:47,758] INFO [oracle-sink|task-0] HHH10001007: JDBC isolation level: (org.hibernate.orm.connections.pooling.c3p0:200) [2025-05-07 16:32:47,777] INFO [oracle-sink|task-1] HHH10001007: JDBC isolation level: (org.hibernate.orm.connections.pooling.c3p0:200) [2025-05-07 16:32:47,815] INFO [oracle-sink|task-1] Initializing c3p0 pool... 
com.mchange.v2.c3p0.PoolBackedDataSource@8112d9c1 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@c8a19a8a [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 1bqomyqba1b4ud6q161a3he|5f172447, idleConnectionTestPeriod -> 300, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 5, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@926319fe [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 1bqomyqba1b4ud6q161a3he|7c487076, jdbcUrl -> jdbc:oracle:thin:@99.12.11.32:1521/youbip07, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 1bqomyqba1b4ud6q161a3he|361d5e43, numHelperThreads -> 3 ] (com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource:212) [2025-05-07 16:32:47,816] INFO [oracle-sink|task-0] Initializing c3p0 pool... 
com.mchange.v2.c3p0.PoolBackedDataSource@c1062c76 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@6c4018ea [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 1bqomyqba1b4ud6q161a3he|3b12985b, idleConnectionTestPeriod -> 300, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 5, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@3001a1b5 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 1bqomyqba1b4ud6q161a3he|28a22392, jdbcUrl -> jdbc:oracle:thin:@99.12.11.32:1521/youbip07, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 1bqomyqba1b4ud6q161a3he|38e73271, numHelperThreads -> 3 ] (com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource:212) [2025-05-07 16:32:49,126] INFO [oracle-sink|task-1] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 
(org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator:58) [2025-05-07 16:32:49,126] INFO [oracle-sink|task-0] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) (org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator:58) [2025-05-07 16:32:49,163] INFO [oracle-sink|task-1] Using dialect io.debezium.connector.jdbc.dialect.oracle.OracleDatabaseDialect (io.debezium.connector.jdbc.dialect.DatabaseDialectResolver:44) [2025-05-07 16:32:49,163] INFO [oracle-sink|task-0] Using dialect io.debezium.connector.jdbc.dialect.oracle.OracleDatabaseDialect (io.debezium.connector.jdbc.dialect.DatabaseDialectResolver:44) [2025-05-07 16:32:49,182] INFO [oracle-sink|task-1] Database TimeZone: +00:00 (database), Asia/Shanghai (session) (io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect:117) [2025-05-07 16:32:49,182] INFO [oracle-sink|task-0] Database TimeZone: +00:00 (database), Asia/Shanghai (session) (io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect:117) [2025-05-07 16:32:49,185] INFO [oracle-sink|task-0] Database version 19.0.0 (io.debezium.connector.jdbc.JdbcChangeEventSink:68) [2025-05-07 16:32:49,185] INFO [oracle-sink|task-1] Database version 19.0.0 (io.debezium.connector.jdbc.JdbcChangeEventSink:68) [2025-05-07 16:32:49,185] INFO [oracle-sink|task-0] WorkerSinkTask{id=oracle-sink-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:325) [2025-05-07 16:32:49,185] INFO [oracle-sink|task-1] WorkerSinkTask{id=oracle-sink-1} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:325) [2025-05-07 16:32:49,185] INFO [oracle-sink|task-1] WorkerSinkTask{id=oracle-sink-1} Executing sink task (org.apache.kafka.connect.runtime.WorkerSinkTask:211) [2025-05-07 16:32:49,185] INFO [oracle-sink|task-0] WorkerSinkTask{id=oracle-sink-0} Executing sink task 
(org.apache.kafka.connect.runtime.WorkerSinkTask:211) [2025-05-07 16:32:49,192] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365) [2025-05-07 16:32:49,192] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Discovered group coordinator iZuf66nl2clxz2d4rj261wZ:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:937) [2025-05-07 16:32:49,192] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Cluster ID: -OqG97XARMKciVb2KLRj7Q (org.apache.kafka.clients.Metadata:365) [2025-05-07 16:32:49,193] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Discovered group coordinator iZuf66nl2clxz2d4rj261wZ:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:937) [2025-05-07 16:32:49,193] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:605) [2025-05-07 16:32:49,193] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:605) [2025-05-07 16:32:49,201] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Request joining group due to: need to re-join with the given member-id: connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1103) [2025-05-07 16:32:49,201] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:605) [2025-05-07 16:32:49,202] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Request joining group due to: need to re-join with the given member-id: connector-consumer-oracle-sink-0-d48257f5-83f6-481f-962e-84052cd1afdf (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1103) [2025-05-07 16:32:49,202] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:605) [2025-05-07 16:32:49,218] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Successfully joined group with generation Generation{generationId=1, memberId='connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:666) [2025-05-07 16:32:49,218] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Successfully joined group with generation Generation{generationId=1, memberId='connector-consumer-oracle-sink-0-d48257f5-83f6-481f-962e-84052cd1afdf', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:666) [2025-05-07 16:32:49,227] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Finished assignment for group at generation 1: {connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6=Assignment(partitions=[]), connector-consumer-oracle-sink-0-d48257f5-83f6-481f-962e-84052cd1afdf=Assignment(partitions=[OMS.dev_kafka.products-0])} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:664) [2025-05-07 16:32:49,231] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] 
Successfully synced group in generation Generation{generationId=1, memberId='connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:843) [2025-05-07 16:32:49,231] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Notifying assignor about the new Assignment(partitions=[]) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:324) [2025-05-07 16:32:49,231] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:58) [2025-05-07 16:32:49,231] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Successfully synced group in generation Generation{generationId=1, memberId='connector-consumer-oracle-sink-0-d48257f5-83f6-481f-962e-84052cd1afdf', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:843) [2025-05-07 16:32:49,232] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Notifying assignor about the new Assignment(partitions=[OMS.dev_kafka.products-0]) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:324) [2025-05-07 16:32:49,232] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Adding newly assigned partitions: OMS.dev_kafka.products-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:58) [2025-05-07 16:32:49,241] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Found no committed offset for partition OMS.dev_kafka.products-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1508) [2025-05-07 16:32:49,242] INFO 
[oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Resetting offset for partition OMS.dev_kafka.products-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:407)
[2025-05-07 16:32:49,253] ERROR [oracle-sink|task-0] Error transforming field names: Invalid value: null used for required field: "ID", schema type: INT32 (io.debezium.connector.jdbc.transforms.FieldNameTransformation:142)
org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "ID", schema type: INT32
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:224)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217)
    at org.apache.kafka.connect.data.Struct.validate(Struct.java:233)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:254)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217)
    at org.apache.kafka.connect.data.Struct.put(Struct.java:216)
    at org.apache.kafka.connect.data.Struct.put(Struct.java:203)
    at java.base/java.util.HashMap.forEach(HashMap.java:1421)
    at io.debezium.connector.jdbc.transforms.FieldNameTransformation.transformValue(FieldNameTransformation.java:189)
    at io.debezium.connector.jdbc.transforms.FieldNameTransformation.apply(FieldNameTransformation.java:129)
    at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57)
    at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:57)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:208)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:245)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:180)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:57)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:565)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:518)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:344)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2025-05-07 16:32:49,256] ERROR [oracle-sink|task-0] Error encountered in task oracle-sink-0. Executing stage 'TRANSFORMATION' with class 'io.debezium.connector.jdbc.transforms.FieldNameTransformation', where consumed record is {topic='OMS.dev_kafka.products', partition=0, offset=0, timestamp=1746606751375, timestampType=CreateTime}. (org.apache.kafka.connect.runtime.errors.LogReporter:70)
org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "ID", schema type: INT32
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:224)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217)
    at org.apache.kafka.connect.data.Struct.validate(Struct.java:233)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:254)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217)
    at org.apache.kafka.connect.data.Struct.put(Struct.java:216)
    at org.apache.kafka.connect.data.Struct.put(Struct.java:203)
    at java.base/java.util.HashMap.forEach(HashMap.java:1421)
    at io.debezium.connector.jdbc.transforms.FieldNameTransformation.transformValue(FieldNameTransformation.java:189)
    at io.debezium.connector.jdbc.transforms.FieldNameTransformation.apply(FieldNameTransformation.java:129)
    at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57)
    at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:57)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:208)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:245)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:180)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:57)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:565)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:518)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:344)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2025-05-07 16:32:49,258] ERROR [oracle-sink|task-0] WorkerSinkTask{id=oracle-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:234)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:261)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:180)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:57)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:565)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:518)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:344)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "ID", schema type: INT32
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:224)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217)
    at org.apache.kafka.connect.data.Struct.validate(Struct.java:233)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:254)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217)
    at org.apache.kafka.connect.data.Struct.put(Struct.java:216)
    at org.apache.kafka.connect.data.Struct.put(Struct.java:203)
    at java.base/java.util.HashMap.forEach(HashMap.java:1421)
    at io.debezium.connector.jdbc.transforms.FieldNameTransformation.transformValue(FieldNameTransformation.java:189)
    at io.debezium.connector.jdbc.transforms.FieldNameTransformation.apply(FieldNameTransformation.java:129)
    at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57)
    at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:57)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:208)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:245)
    ... 15 more
[2025-05-07 16:32:49,258] INFO [oracle-sink|task-0] Closing session.
(io.debezium.connector.jdbc.JdbcChangeEventSink:270) [2025-05-07 16:32:49,259] INFO [oracle-sink|task-0] Closing the session factory (io.debezium.connector.jdbc.JdbcSinkConnectorTask:187) [2025-05-07 16:32:49,262] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Revoke previously assigned partitions OMS.dev_kafka.products-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:80) [2025-05-07 16:32:49,262] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Member connector-consumer-oracle-sink-0-d48257f5-83f6-481f-962e-84052cd1afdf sending LeaveGroup request to coordinator iZuf66nl2clxz2d4rj261wZ:9092 (id: 2147483647 rack: null) due to the consumer is being closed (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1174) [2025-05-07 16:32:49,263] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1056) [2025-05-07 16:32:49,263] INFO [oracle-sink|task-0] [Consumer clientId=connector-consumer-oracle-sink-0, groupId=connect-oracle-sink] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1103) [2025-05-07 16:32:49,749] INFO [oracle-sink|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:685) [2025-05-07 16:32:49,749] INFO [oracle-sink|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:689) [2025-05-07 16:32:49,750] INFO [oracle-sink|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:689) [2025-05-07 16:32:49,750] INFO [oracle-sink|task-0] Metrics reporters closed 
(org.apache.kafka.common.metrics.Metrics:695) [2025-05-07 16:32:49,752] INFO [oracle-sink|task-0] App info kafka.consumer for connector-consumer-oracle-sink-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:89) [2025-05-07 16:32:52,219] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Request joining group due to: group is already rebalancing (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1103) [2025-05-07 16:32:52,221] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Revoke previously assigned partitions (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:80) [2025-05-07 16:32:52,222] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:605) [2025-05-07 16:32:52,223] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Successfully joined group with generation Generation{generationId=2, memberId='connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:666) [2025-05-07 16:32:52,223] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Finished assignment for group at generation 2: {connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6=Assignment(partitions=[OMS.dev_kafka.products-0])} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:664) [2025-05-07 16:32:52,225] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Successfully synced group in generation Generation{generationId=2, memberId='connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6', 
protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:843) [2025-05-07 16:32:52,226] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Notifying assignor about the new Assignment(partitions=[OMS.dev_kafka.products-0]) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:324) [2025-05-07 16:32:52,226] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Adding newly assigned partitions: OMS.dev_kafka.products-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:58) [2025-05-07 16:32:52,226] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Found no committed offset for partition OMS.dev_kafka.products-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1508) [2025-05-07 16:32:52,227] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Resetting offset for partition OMS.dev_kafka.products-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[iZuf66nl2clxz2d4rj261wZ:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:407) [2025-05-07 16:32:52,232] ERROR [oracle-sink|task-1] Error transforming field names: Invalid value: null used for required field: "ID", schema type: INT32 (io.debezium.connector.jdbc.transforms.FieldNameTransformation:142) org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "ID", schema type: INT32 at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:224) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217) at org.apache.kafka.connect.data.Struct.validate(Struct.java:233) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:254) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217) at org.apache.kafka.connect.data.Struct.put(Struct.java:216) at org.apache.kafka.connect.data.Struct.put(Struct.java:203) at java.base/java.util.HashMap.forEach(HashMap.java:1421) at io.debezium.connector.jdbc.transforms.FieldNameTransformation.transformValue(FieldNameTransformation.java:189) at io.debezium.connector.jdbc.transforms.FieldNameTransformation.apply(FieldNameTransformation.java:129) at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57) at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:57) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:208) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:245) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:180) at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:57) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:565) at 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:518) at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:344) at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247) at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281) at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) [2025-05-07 16:32:52,232] ERROR [oracle-sink|task-1] Error encountered in task oracle-sink-1. Executing stage 'TRANSFORMATION' with class 'io.debezium.connector.jdbc.transforms.FieldNameTransformation', where consumed record is {topic='OMS.dev_kafka.products', partition=0, offset=0, timestamp=1746606751375, timestampType=CreateTime}. 
(org.apache.kafka.connect.runtime.errors.LogReporter:70) org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "ID", schema type: INT32 at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:224) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217) at org.apache.kafka.connect.data.Struct.validate(Struct.java:233) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:254) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217) at org.apache.kafka.connect.data.Struct.put(Struct.java:216) at org.apache.kafka.connect.data.Struct.put(Struct.java:203) at java.base/java.util.HashMap.forEach(HashMap.java:1421) at io.debezium.connector.jdbc.transforms.FieldNameTransformation.transformValue(FieldNameTransformation.java:189) at io.debezium.connector.jdbc.transforms.FieldNameTransformation.apply(FieldNameTransformation.java:129) at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57) at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:57) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:208) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:245) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:180) at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:57) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:565) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:518) at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:344) at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247) at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281) at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) [2025-05-07 16:32:52,233] ERROR [oracle-sink|task-1] WorkerSinkTask{id=oracle-sink-1} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:234) org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:261) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:180) at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:57) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:565) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:518) at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:344) at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247) at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281) at 
org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "ID", schema type: INT32 at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:224) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217) at org.apache.kafka.connect.data.Struct.validate(Struct.java:233) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:254) at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:217) at org.apache.kafka.connect.data.Struct.put(Struct.java:216) at org.apache.kafka.connect.data.Struct.put(Struct.java:203) at java.base/java.util.HashMap.forEach(HashMap.java:1421) at io.debezium.connector.jdbc.transforms.FieldNameTransformation.transformValue(FieldNameTransformation.java:189) at io.debezium.connector.jdbc.transforms.FieldNameTransformation.apply(FieldNameTransformation.java:129) at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57) at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:57) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:208) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:245) ... 15 more [2025-05-07 16:32:52,233] INFO [oracle-sink|task-1] Closing session. 
(io.debezium.connector.jdbc.JdbcChangeEventSink:270) [2025-05-07 16:32:52,233] INFO [oracle-sink|task-1] Closing the session factory (io.debezium.connector.jdbc.JdbcSinkConnectorTask:187) [2025-05-07 16:32:52,235] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Revoke previously assigned partitions OMS.dev_kafka.products-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:80) [2025-05-07 16:32:52,235] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Member connector-consumer-oracle-sink-1-f5a77b33-a9e9-45cd-8d2a-5c9c41107af6 sending LeaveGroup request to coordinator iZuf66nl2clxz2d4rj261wZ:9092 (id: 2147483647 rack: null) due to the consumer is being closed (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1174) [2025-05-07 16:32:52,235] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1056) [2025-05-07 16:32:52,235] INFO [oracle-sink|task-1] [Consumer clientId=connector-consumer-oracle-sink-1, groupId=connect-oracle-sink] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1103) [2025-05-07 16:32:52,731] INFO [oracle-sink|task-1] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:685) [2025-05-07 16:32:52,731] INFO [oracle-sink|task-1] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:689) [2025-05-07 16:32:52,731] INFO [oracle-sink|task-1] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:689) [2025-05-07 16:32:52,732] INFO [oracle-sink|task-1] Metrics reporters closed 
(org.apache.kafka.common.metrics.Metrics:695) [2025-05-07 16:32:52,734] INFO [oracle-sink|task-1] App info kafka.consumer for connector-consumer-oracle-sink-1 unregistered (org.apache.kafka.common.utils.AppInfoParser:89)