2021-02-24 10:32:08,817 INFO || WorkerInfo values: jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/opt/confluent/kafka-distributed/bin/../logs, -Dlog4j.configuration=file:./bin/../etc/kafka/connect-log4j.properties jvm.spec = Oracle Corporation, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_201, 25.201-b09 jvm.classpath = .:/usr/java/jdk1.8.0_201/lib/dt.jar:/usr/java/jdk1.8.0_201/lib/tools.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka_2.12-5.5.1-ccs-javadoc.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-annotations-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-core-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/maven-artifact-3.6.3.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-servlet-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/scala-collection-compat_2.12-2.1.3.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-api-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/commons-compress-1.19.jar:/opt/confluent/kafka-distributed/share/java/kafka/audience-annotations-0.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-file-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/jakarta.activation-api-1.2.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-transport-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/slf4j-api-1.7.30.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-continuation-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-jaxrs-base-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-buffer-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/
java/kafka/validation-api-2.0.1.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-json-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-streams-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/osgi-resource-locator-1.0.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/javassist-3.26.0-GA.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-databind-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/zstd-jni-1.4.4-7.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-util-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/jersey-container-servlet-core-2.28.jar:/opt/confluent/kafka-distributed/share/java/kafka/jersey-media-jaxb-2.28.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-codec-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/rocksdbjni-5.18.3.jar:/opt/confluent/kafka-distributed/share/java/kafka/javassist-3.22.0-CR2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jersey-client-2.28.jar:/opt/confluent/kafka-distributed/share/java/kafka/hk2-utils-2.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka_2.12-5.5.1-ccs-test-sources.jar:/opt/confluent/kafka-distributed/share/java/kafka/commons-logging-1.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka_2.12-5.5.1-ccs-scaladoc.jar:/opt/confluent/kafka-distributed/share/java/kafka/hk2-locator-2.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/httpmime-4.5.11.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-datatype-jdk8-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/commons-codec-1.11.jar:/opt/confluent/kafka-distributed/share/java/kafka/jersey-server-2.28.jar:/opt/confluent/kafka-distributed/share/java/kafka/hk2-api-2.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/jakarta.annotation-api-1.3.4.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka_2.12-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share
/java/kafka/connect-basic-auth-extension-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-http-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-transport-native-unix-common-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/support-metrics-common-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/commons-cli-1.4.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-jaxrs-json-provider-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-log4j-appender-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/httpclient-4.5.11.jar:/opt/confluent/kafka-distributed/share/java/kafka/zookeeper-3.5.8.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-dataformat-csv-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-client-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/lz4-java-1.7.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-runtime-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/aopalliance-repackaged-2.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/scala-library-2.12.10.jar:/opt/confluent/kafka-distributed/share/java/kafka/metrics-core-2.2.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-tools-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/httpcore-4.4.13.jar:/opt/confluent/kafka-distributed/share/java/kafka/scala-java8-compat_2.12-0.9.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-module-scala_2.12-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-server-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-io-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-handler-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/jakarta.xml.bind-api-2.3.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-transport-native-epoll-4.1.48.Final.jar:
/opt/confluent/kafka-distributed/share/java/kafka/jersey-hk2-2.28.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-common-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/reflections-0.9.12.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-streams-scala_2.12-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/argparse4j-0.7.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-streams-test-utils-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/activation-1.1.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-module-paranamer-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/support-metrics-client-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-security-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka_2.12-5.5.1-ccs-test.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-streams-examples-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/jersey-common-2.28.jar:/opt/confluent/kafka-distributed/share/java/kafka/scala-reflect-2.12.10.jar:/opt/confluent/kafka-distributed/share/java/kafka/jaxb-api-2.3.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-transforms-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/javax.servlet-api-3.1.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/slf4j-log4j12-1.7.30.jar:/opt/confluent/kafka-distributed/share/java/kafka/jackson-module-jaxb-annotations-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jakarta.ws.rs-api-2.1.5.jar:/opt/confluent/kafka-distributed/share/java/kafka/commons-lang3-3.8.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/plexus-utils-3.2.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/opt/confluent/kafka-distributed/share/java/kafka/scala-logging_2.12-3.9.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jersey-container-servlet-2.28.jar:/opt/confluent/kafka
-distributed/share/java/kafka/jakarta.inject-2.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka/log4j-1.2.17.jar:/opt/confluent/kafka-distributed/share/java/kafka/zookeeper-jute-3.5.8.jar:/opt/confluent/kafka-distributed/share/java/kafka/netty-resolver-4.1.48.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-mirror-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka.jar:/opt/confluent/kafka-distributed/share/java/kafka/connect-mirror-client-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/jopt-simple-5.0.4.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka_2.12-5.5.1-ccs-sources.jar:/opt/confluent/kafka-distributed/share/java/kafka/snappy-java-1.1.7.3.jar:/opt/confluent/kafka-distributed/share/java/kafka/paranamer-2.8.jar:/opt/confluent/kafka-distributed/share/java/kafka/avro-1.9.2.jar:/opt/confluent/kafka-distributed/share/java/kafka/jetty-servlets-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-clients-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka/kafka-metric-reporter-1.0-SNAPSHOT.jar:/opt/confluent/kafka-distributed/share/java/kafka/transform-1.0-SNAPSHOT.jar:/opt/confluent/kafka-distributed/share/java/confluent-common/build-tools-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/confluent-common/common-metrics-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/confluent-common/common-utils-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/confluent-common/common-config-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/confluent-common/slf4j-api-1.7.26.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/classgraph-4.8.21.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-connect-json-schema-converter-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-json-schema-provider-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-schema-serializer-5.5.1.j
ar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-annotations-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-streams-protobuf-serde-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-core-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-stdlib-jdk8-1.3.71.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-streams-json-schema-serde-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-connect-avro-converter-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/mbknor-jackson-jsonschema_2.12-1.0.39.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jakarta.inject-2.6.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/commons-compress-1.19.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-scripting-compiler-impl-embeddable-1.3.50.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/error_prone_annotations-2.3.4.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-script-runtime-1.3.50.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/guava-24.0-jre.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/checker-compat-qual-2.0.0.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-stdlib-1.3.71.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/validation-api-2.0.1.Final.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jersey-common-2.30.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-streams-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-scripting-compiler-embeddable-1.3.50.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-scripting-jvm-1.3.50.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlinx-coroutines-core-common-1.1.1.jar:
/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-datatype-guava-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-databind-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-stdlib-jdk7-1.3.71.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-stdlib-common-1.3.71.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-streams-avro-serde-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/animal-sniffer-annotations-1.14.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/rocksdbjni-5.18.3.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/swagger-annotations-1.6.0.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-schema-registry-client-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/commons-logging-1.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/commons-collections-3.2.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jakarta.ws.rs-api-2.1.6.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-datatype-jdk8-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlinx-coroutines-core-1.1.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-module-parameter-names-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/handy-uri-templates-2.1.8.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jakarta.annotation-api-1.3.5.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-scripting-common-1.3.50.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/protobuf-java-util-3.11.4.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/joda-time-2.9.9.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/scala-library-2.12.10.jar:/opt/confluent/kafka-distributed/share/java/kaf
ka-serde-tools/kafka-protobuf-provider-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/annotations-13.0.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/protobuf-java-3.11.4.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-protobuf-serializer-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/re2j-1.3.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/wire-schema-3.2.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/javax.ws.rs-api-2.1.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/okio-2.5.0.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/commons-validator-1.6.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-connect-protobuf-converter-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-json-serializer-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kotlin-reflect-1.3.50.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/j2objc-annotations-1.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jsr305-1.3.9.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/osgi-resource-locator-1.0.3.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/wire-runtime-3.2.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-datatype-jsr310-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/json-20190722.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-json-schema-serializer-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/jackson-datatype-joda-2.10.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/org.everit.json.schema-1.12.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/avro-1.9.2.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/commons-digester-1.8.1.jar:/o
pt/confluent/kafka-distributed/share/java/kafka-serde-tools/gson-2.8.5.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-avro-serializer-5.5.1.jar:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/kafka-connect-avro-data-6.0.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka_2.12-5.5.1-ccs-javadoc.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-annotations-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-core-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/maven-artifact-3.6.3.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-servlet-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/scala-collection-compat_2.12-2.1.3.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-api-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/commons-compress-1.19.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-file-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jakarta.activation-api-1.2.1.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-transport-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-continuation-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-jaxrs-base-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-buffer-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/validation-api-2.0.1.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-json-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-streams-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/osgi-resource-locator-1.0
.1.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/javassist-3.26.0-GA.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-databind-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/zstd-jni-1.4.4-7.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-util-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-container-servlet-core-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-media-jaxb-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-codec-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/rocksdbjni-5.18.3.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-client-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/hk2-utils-2.5.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka_2.12-5.5.1-ccs-test-sources.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/commons-logging-1.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka_2.12-5.5.1-ccs-scaladoc.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/hk2-locator-2.5.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/httpmime-4.5.11.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-datatype-jdk8-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/commons-codec-1.11.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-server-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/hk2-api-2.5.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jakarta.annotation-api-1.3.4.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka_2.12-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-basic-auth-extension-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka
/jetty-http-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/support-metrics-common-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/commons-cli-1.4.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-log4j-appender-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/httpclient-4.5.11.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/zookeeper-3.5.8.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-dataformat-csv-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-client-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/lz4-java-1.7.1.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-runtime-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/aopalliance-repackaged-2.5.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/scala-library-2.12.10.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/metrics-core-2.2.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-tools-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/httpcore-4.4.13.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/scala-java8-compat_2.12-0.9.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-module-scala_2.12-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-server-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-io-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-handler-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.2.jar:/opt/confluent/kafka-distributed/bin/../share/
java/kafka/netty-transport-native-epoll-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-hk2-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-common-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/reflections-0.9.12.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-streams-scala_2.12-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/argparse4j-0.7.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-streams-test-utils-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/activation-1.1.1.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-module-paranamer-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/support-metrics-client-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-security-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka_2.12-5.5.1-ccs-test.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-streams-examples-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-common-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/scala-reflect-2.12.10.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-transforms-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.10.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.5.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/opt/confluent/kafka-distributed/bin/../share/ja
va/kafka/javax.ws.rs-api-2.1.1.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/scala-logging_2.12-3.9.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jersey-container-servlet-2.28.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jakarta.inject-2.5.0.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/log4j-1.2.17.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/zookeeper-jute-3.5.8.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/netty-resolver-4.1.48.Final.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-mirror-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/connect-mirror-client-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka_2.12-5.5.1-ccs-sources.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/snappy-java-1.1.7.3.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/paranamer-2.8.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/avro-1.9.2.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/jetty-servlets-9.4.24.v20191120.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-clients-5.5.1-ccs.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/kafka-metric-reporter-1.0-SNAPSHOT.jar:/opt/confluent/kafka-distributed/bin/../share/java/kafka/transform-1.0-SNAPSHOT.jar:/opt/confluent/kafka-distributed/bin/../support-metrics-client/build/dependant-libs-2.12.10/*:/opt/confluent/kafka-distributed/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* os.spec = Linux, amd64, 3.10.0-693.el7.x86_64 os.vcpus = 6 [org.apache.kafka.connect.runtime.WorkerInfo] 2021-02-24 10:32:08,827 INFO || Scanning for plugin classes. This might take a moment ... 
[org.apache.kafka.connect.cli.ConnectDistributed] 2021-02-24 10:32:08,852 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,486 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,486 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,486 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,486 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,486 INFO || Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,486 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 
'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2021-02-24 10:32:10,487 INFO || Added plugin 
'com.datatom.kafka.connect.smt.OracleParseToPg$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.ParseAvroToDatahub$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.DbzParseToPg$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.OracleParseToHdfs$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.ParseAvroToDatahub$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.OracleParseToPg$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.WrappedRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,488 INFO || Added plugin 'com.datatom.kafka.connect.smt.DbzParseToHdfs$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'com.datatom.kafka.connect.smt.WrappedRecordWithoutPK' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'com.datatom.kafka.connect.smt.DbzParseToPg$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'com.datatom.kafka.connect.smt.OracleParseToHdfs$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'com.datatom.kafka.connect.smt.DbzParseToHdfs$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,489 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:10,490 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka-serde-tools [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,227 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka-serde-tools/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,227 INFO || Added plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,227 INFO || Added plugin 'io.confluent.connect.json.JsonSchemaConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,227 INFO || Added plugin 'io.confluent.connect.protobuf.ProtobufConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,228 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/confluent-common [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,238 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/confluent-common/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,238 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/rest-utils [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,693 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/rest-utils/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:11,694 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka-connect-storage-common [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:12,981 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka-connect-storage-common/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:12,981 INFO || Added plugin 'io.confluent.connect.storage.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:12,981 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka-connect-hdfs [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,202 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka-connect-hdfs/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,202 INFO || Added plugin 'io.confluent.connect.hdfs.HdfsSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,202 INFO || Added plugin 'io.confluent.connect.hdfs.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,202 INFO || Added plugin 'io.confluent.connect.hdfs.transforms.LongConvertDateFormat$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,202 INFO || Added plugin 'io.confluent.connect.hdfs.transforms.LongConvertDateFormat$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,234 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka-connect-jdbc [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,380 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka-connect-jdbc/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,380 INFO || Added plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,380 INFO || Added plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,389 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Added plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Added plugin 'io.debezium.converters.ByteBufferConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Added plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Added plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Added plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,476 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,493 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,697 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,698 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,698 INFO || Added plugin 'io.debezium.transforms.WrappedRecordWithoutPK' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,701 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,944 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,944 INFO || Added plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:16,948 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka-connect-oracle [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,197 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka-connect-oracle/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,197 INFO || Added plugin 'com.ecer.kafka.connect.oracle.OracleSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,197 INFO || Added plugin 'com.ecer.kafka.connect.oracle.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,231 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,497 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,498 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,498 INFO || Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:17,521 INFO || Loading plugin from: /opt/confluent/kafka-distributed/share/java/kafka-connect-hbase [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:19,325 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/opt/confluent/kafka-distributed/share/java/kafka-connect-hbase/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:19,325 INFO || Added plugin 'com.datatom.HbaseSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,342 INFO || Registered loader: sun.misc.Launcher$AppClassLoader@764c12b6 [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,343 INFO || Added aliases 'HbaseSinkConnector' and 'HbaseSink' to plugin 'com.datatom.HbaseSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'OracleSourceConnector' and 'OracleSource' to plugin 'com.ecer.kafka.connect.oracle.OracleSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'HdfsSinkConnector' and 'HdfsSink' to plugin 'io.confluent.connect.hdfs.HdfsSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'JdbcSinkConnector' and 'JdbcSink' to plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'JdbcSourceConnector' and 'JdbcSource' to plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'MySqlConnector' and 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'OracleConnector' and 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,344 INFO || Added aliases 'PostgresConnector' and 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'SqlServerConnector' and 'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'MirrorCheckpointConnector' and 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'MirrorHeartbeatConnector' and 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'MirrorSourceConnector' and 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'MockConnector' and 'Mock' to plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,345 INFO || Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'AvroConverter' and 'Avro' to plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'JsonSchemaConverter' and 'JsonSchema' to plugin 'io.confluent.connect.json.JsonSchemaConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'ProtobufConverter' and 'Protobuf' to plugin 'io.confluent.connect.protobuf.ProtobufConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'ByteBufferConverter' and 'ByteBuffer' to plugin 'io.debezium.converters.ByteBufferConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'CloudEventsConverter' and 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,346 INFO || Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'ByteBufferConverter' and 'ByteBuffer' to plugin 'io.debezium.converters.ByteBufferConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,347 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,348 INFO || Added alias 'WrappedRecord' to plugin 'com.datatom.kafka.connect.smt.WrappedRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,348 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,348 INFO || Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,348 INFO || Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added aliases 'AllConnectorClientConfigOverridePolicy' and 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added aliases 'NoneConnectorClientConfigOverridePolicy' and 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,349 INFO || Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2021-02-24 10:32:21,498 INFO || DistributedConfig values:
    access.control.allow.methods =
    access.control.allow.origin =
    admin.listeners = null
    bootstrap.servers = [dn90221:9092]
    client.dns.lookup = default
    client.id =
    config.providers = []
    config.storage.replication.factor = 1
    config.storage.topic = connect-configs
    connect.protocol = sessioned
    connections.max.idle.ms = 540000
    connector.client.config.override.policy = None
    group.id = plaintext123
    header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
    heartbeat.interval.ms = 3000
    inter.worker.key.generation.algorithm = HmacSHA256
    inter.worker.key.size = null
    inter.worker.key.ttl.ms = 3600000
    inter.worker.signature.algorithm = HmacSHA256
    inter.worker.verification.algorithms = [HmacSHA256]
    internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
    internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
    key.converter = class io.confluent.connect.avro.AvroConverter
    listeners = null
    metadata.max.age.ms = 300000
    metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter]
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    offset.flush.interval.ms = 10000
    offset.flush.timeout.ms = 5000
    offset.storage.partitions = 25
    offset.storage.replication.factor = 1
    offset.storage.topic = connect-offsets
    plugin.path = [/opt/confluent/kafka-distributed/share/java]
    rebalance.timeout.ms = 60000
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 40000
    rest.advertised.host.name = null
    rest.advertised.listener = null
    rest.advertised.port = null
    rest.extension.classes = []
    rest.host.name = 192.168.90.221
    rest.port = 8083
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    scheduled.rebalance.max.delay.ms = 300000
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    status.storage.partitions = 5
    status.storage.replication.factor = 1
    status.storage.topic = connect-status
    task.shutdown.graceful.timeout.ms = 5000
    topic.tracking.allow.reset = true
    topic.tracking.enable = true
    value.converter = class io.confluent.connect.avro.AvroConverter
    worker.sync.timeout.ms = 3000
    worker.unsync.backoff.ms = 300000
 [org.apache.kafka.connect.runtime.distributed.DistributedConfig]
2021-02-24 10:32:21,500 INFO || Creating Kafka admin client [org.apache.kafka.connect.util.ConnectUtils]
2021-02-24 10:32:21,502 INFO || AdminClientConfig values:
    bootstrap.servers = [dn90221:9092]
    client.dns.lookup = default
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter]
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
 [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,539 INFO || KafkaTopicMetricsReporterConfig values:
    metric.reporter.bootstrap.servers = dn90221:9092
    metric.reporter.interval = 10
    metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total
    metric.reporter.topic = kafka_monitor_report
    producer.sasl.kerberos.service.name =
    producer.sasl.mechanism = GSSAPI
    producer.security.protocol = PLAINTEXT
 [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig]
2021-02-24 10:32:21,548 INFO || ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [dn90221:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name =
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
 [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:21,614 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:21,614 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:21,614 INFO || Kafka startTimeMs: 1614133941613 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:21,770 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,770 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,770 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,770 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,770 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,770 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,770 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:21,771 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:21,771 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:21,771 INFO || Kafka startTimeMs: 1614133941771 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,003 INFO || Kafka cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.connect.util.ConnectUtils]
2021-02-24 10:32:22,004 INFO || [Producer clientId=producer-1] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,013 INFO || [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2021-02-24 10:32:22,039 INFO || Logging initialized @13630ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log]
2021-02-24 10:32:22,119 INFO || Added connector for http://192.168.90.221:8083 [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,119 INFO || Initializing REST server [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,127 INFO || jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_201-b09 [org.eclipse.jetty.server.Server]
2021-02-24 10:32:22,155 INFO || Started http_192.168.90.2218083@471b7c9c{HTTP/1.1,[http/1.1]}{192.168.90.221:8083} [org.eclipse.jetty.server.AbstractConnector]
2021-02-24 10:32:22,155 INFO || Started @13747ms [org.eclipse.jetty.server.Server]
2021-02-24 10:32:22,184 INFO || Advertised URI: http://192.168.90.221:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,185 INFO || REST server listening at http://192.168.90.221:8083/, advertising URL http://192.168.90.221:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,185 INFO || Advertised URI: http://192.168.90.221:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,185 INFO || REST admin endpoints at http://192.168.90.221:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,185 INFO || Advertised URI: http://192.168.90.221:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:22,191 INFO || Setting up None Policy for ConnectorClientConfigOverride. This will disallow any client configuration to be overridden [org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy]
2021-02-24 10:32:22,202 INFO || KafkaTopicMetricsReporterConfig values:
    metric.reporter.bootstrap.servers = dn90221:9092
    metric.reporter.interval = 10
    metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total
    metric.reporter.topic = kafka_monitor_report
    producer.sasl.kerberos.service.name =
    producer.sasl.mechanism = GSSAPI
    producer.security.protocol = PLAINTEXT
 [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig]
2021-02-24 10:32:22,203 INFO || ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [dn90221:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = producer-2
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,210 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,211 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,211 INFO || Kafka startTimeMs: 1614133942210 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,215 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,215 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,215 INFO || [Producer clientId=producer-2] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg 
[org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,215 INFO || Kafka startTimeMs: 1614133942215 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,238 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig] 2021-02-24 10:32:22,239 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig] 2021-02-24 10:32:22,265 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig] 2021-02-24 10:32:22,266 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-3 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 
1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,271 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,271 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,271 INFO || Kafka startTimeMs: 1614133942271 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,277 INFO || [Producer clientId=producer-3] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,296 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,296 INFO || Kafka commitId: 5b2445123128cfaf 
[org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,296 INFO || Kafka startTimeMs: 1614133942296 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,300 INFO || Kafka Connect distributed worker initialization took 13473ms [org.apache.kafka.connect.cli.ConnectDistributed] 2021-02-24 10:32:22,300 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect] 2021-02-24 10:32:22,302 INFO || Initializing REST resources [org.apache.kafka.connect.runtime.rest.RestServer] 2021-02-24 10:32:22,302 INFO || [Worker clientId=connect-1, groupId=plaintext123] Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:32:22,302 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:32:22,302 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] 2021-02-24 10:32:22,302 INFO || Starting KafkaBasedLog with topic connect-offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2021-02-24 10:32:22,303 INFO || AdminClientConfig values: bootstrap.servers = [dn90221:9092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 
sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,304 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig] 2021-02-24 10:32:22,304 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-4 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,311 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,311 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,311 INFO || Kafka startTimeMs: 1614133942311 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,323 WARN || The configuration 'group.id' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'rest.port' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,323 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,324 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,324 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,324 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,324 INFO || Kafka startTimeMs: 1614133942324 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,325 INFO || [Producer clientId=producer-4] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,354 INFO || Adding admin resources to main listener [org.apache.kafka.connect.runtime.rest.RestServer] 2021-02-24 10:32:22,443 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session] 2021-02-24 10:32:22,443 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session] 2021-02-24 10:32:22,444 INFO || node0 Scavenging every 660000ms [org.eclipse.jetty.server.session] 2021-02-24 10:32:22,474 INFO || Created topic (name=connect-offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at dn90221:9092 [org.apache.kafka.connect.util.TopicAdmin] 2021-02-24 10:32:22,477 INFO || [Producer clientId=producer-4] Closing the Kafka producer with timeoutMillis = 
9223372036854775807 ms. [org.apache.kafka.clients.producer.KafkaProducer] 2021-02-24 10:32:22,482 INFO || ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-5 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null 
ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,482 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig] 2021-02-24 10:32:22,482 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-6 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit 
sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,486 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,486 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,486 INFO || Kafka startTimeMs: 1614133942485 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,492 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,492 INFO || [Producer clientId=producer-6] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,492 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,493 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,493 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,493 INFO || Kafka startTimeMs: 1614133942493 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,496 INFO || [Producer clientId=producer-5] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,502 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [dn90221:9092] check.crcs = true client.dns.lookup = default client.id = client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = plaintext123 group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 3600 max.poll.records = 200 metadata.max.age.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = 
[class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,503 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig]
2021-02-24 10:32:22,503 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-7 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,506 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,506 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,506 INFO || Kafka startTimeMs: 1614133942506 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,514 INFO || [Producer clientId=producer-7] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,538 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,538 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,539 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,539 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,539 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,539 INFO || Kafka startTimeMs: 1614133942539 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,545 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,565 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Subscribed to partition(s): connect-offsets-0, connect-offsets-5, connect-offsets-10, connect-offsets-20, connect-offsets-15, connect-offsets-9, connect-offsets-11, connect-offsets-4, connect-offsets-16, connect-offsets-17, connect-offsets-3, connect-offsets-24, connect-offsets-23, connect-offsets-13, connect-offsets-18, connect-offsets-22, connect-offsets-8, connect-offsets-2, connect-offsets-12, connect-offsets-19, connect-offsets-14, connect-offsets-1, connect-offsets-6, connect-offsets-7, connect-offsets-21 [org.apache.kafka.clients.consumer.KafkaConsumer]
2021-02-24 10:32:22,571 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,571 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-5 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,571 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-10 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,571 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-20 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,571 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-15 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,571 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-9 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-11 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-16 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-17 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-24 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-23 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-13 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-18 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-22 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-8 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-12 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-19 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-14 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-6 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-7 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,572 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-offsets-21 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,606 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-10 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-8 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-14 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-12 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-2 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-0 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-6 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,607 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-4 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-24 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-18 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-16 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-22 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-20 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-9 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-7 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-13 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-11 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-1 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-5 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-3 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,608 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-23 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,609 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-17 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,609 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-15 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,609 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-21 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,609 INFO || [Consumer clientId=consumer-plaintext123-1, groupId=plaintext123] Resetting offset for partition connect-offsets-19 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,609 INFO || Finished reading KafkaBasedLog for topic connect-offsets [org.apache.kafka.connect.util.KafkaBasedLog]
2021-02-24 10:32:22,609 INFO || Started KafkaBasedLog for topic connect-offsets [org.apache.kafka.connect.util.KafkaBasedLog]
2021-02-24 10:32:22,610 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
2021-02-24 10:32:22,612 INFO || Worker started [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:32:22,612 INFO || Starting KafkaBasedLog with topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog]
2021-02-24 10:32:22,613 INFO || AdminClientConfig values: bootstrap.servers = [dn90221:9092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,614 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig]
2021-02-24 10:32:22,614 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-8 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,619 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,619 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,619 INFO || Kafka startTimeMs: 1614133942619 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,624 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:32:22,625 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,625 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,625 INFO || Kafka startTimeMs: 1614133942625 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,633 INFO || [Producer clientId=producer-8] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,713 INFO || Created topic (name=connect-status, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at dn90221:9092 [org.apache.kafka.connect.util.TopicAdmin]
2021-02-24 10:32:22,715 INFO || [Producer clientId=producer-8] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2021-02-24 10:32:22,718 INFO || ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-9 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,719 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig]
2021-02-24 10:32:22,719 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-10 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,721 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,722 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,722 INFO || Kafka startTimeMs: 1614133942721 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,730 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,730 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,731 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,731 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,731 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,731 INFO || Kafka startTimeMs: 1614133942731 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,732 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [dn90221:9092] check.crcs = true client.dns.lookup = default client.id = client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = plaintext123 group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 3600 max.poll.records = 200 metadata.max.age.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds
= 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,732 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig] 2021-02-24 10:32:22,732 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-11 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 
300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,735 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,735 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,735 INFO || Kafka startTimeMs: 1614133942735 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,737 INFO || [Producer 
clientId=producer-10] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,742 INFO || [Producer clientId=producer-9] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,739 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,744 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,758 WARN || The configuration 'rest.port' was supplied but isn't a known config. 
[org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,758 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,758 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,758 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:32:22,760 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,760 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,760 INFO || Kafka startTimeMs: 1614133942758 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,758 INFO || [Producer clientId=producer-11] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,771 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,785 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Subscribed to partition(s): connect-status-0, connect-status-4, connect-status-1, connect-status-2, connect-status-3 [org.apache.kafka.clients.consumer.KafkaConsumer] 2021-02-24 10:32:22,785 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-status-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,785 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-status-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,785 INFO || [Consumer clientId=consumer-plaintext123-2, 
groupId=plaintext123] Seeking to EARLIEST offset of partition connect-status-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,785 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-status-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,785 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-status-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,812 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Resetting offset for partition connect-status-1 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,812 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Resetting offset for partition connect-status-2 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,812 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Resetting offset for partition connect-status-0 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,812 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Resetting offset for partition connect-status-3 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,812 INFO || [Consumer clientId=consumer-plaintext123-2, groupId=plaintext123] Resetting offset for partition connect-status-4 to offset 0. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2021-02-24 10:32:22,813 INFO || Finished reading KafkaBasedLog for topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog] 2021-02-24 10:32:22,813 INFO || Started KafkaBasedLog for topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog] 2021-02-24 10:32:22,818 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2021-02-24 10:32:22,819 INFO || Starting KafkaBasedLog with topic connect-configs [org.apache.kafka.connect.util.KafkaBasedLog] 2021-02-24 10:32:22,819 INFO || AdminClientConfig values: bootstrap.servers = [dn90221:9092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS 
ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,819 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig] 2021-02-24 10:32:22,820 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-12 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = 
sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,828 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,829 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,829 INFO || Kafka startTimeMs: 1614133942828 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,831 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:32:22,832 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,832 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,832 INFO || Kafka startTimeMs: 1614133942832 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,836 INFO || [Producer clientId=producer-12] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,893 INFO || Created topic (name=connect-configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at dn90221:9092 [org.apache.kafka.connect.util.TopicAdmin] 2021-02-24 10:32:22,894 INFO || [Producer clientId=producer-12] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 
[org.apache.kafka.clients.producer.KafkaProducer] 2021-02-24 10:32:22,897 INFO || ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-13 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null 
ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,898 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig] 2021-02-24 10:32:22,898 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-14 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 
60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,901 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,902 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,902 INFO || Kafka startTimeMs: 1614133942901 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:32:22,906 INFO || [Producer clientId=producer-14] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:32:22,907 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'max.poll.records' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] 2021-02-24 10:32:22,907 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,907 WARN || The configuration 'max.poll.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,907 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,907 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,907 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,907 INFO || Kafka startTimeMs: 1614133942907 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,908 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [dn90221:9092] check.crcs = true client.dns.lookup = default client.id = client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = plaintext123 group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 3600 max.poll.records = 200 metadata.max.age.ms = 300000 metric.reporters = [com.datatom.kafka.monitor.KafkaTopicMetricsReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,908 INFO || KafkaTopicMetricsReporterConfig values: metric.reporter.bootstrap.servers = dn90221:9092 metric.reporter.interval = 10 metric.reporter.metrics = source-record-poll-total,source-record-poll-rate,source-record-write-total,source-record-write-rate,sink-record-read-rate,sink-record-read-total,sink-record-send-rate,sink-record-send-total metric.reporter.topic = kafka_monitor_report producer.sasl.kerberos.service.name = producer.sasl.mechanism = GSSAPI producer.security.protocol = PLAINTEXT [com.datatom.kafka.monitor.KafkaTopicMetricsReporter$KafkaTopicMetricsReporterConfig]
2021-02-24 10:32:22,908 INFO || ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = producer-15 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:32:22,911 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,911 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,911 INFO || Kafka startTimeMs: 1614133942911 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,914 INFO || [Producer clientId=producer-13] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,915 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,915 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,915 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,915 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,915 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 WARN || The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:32:22,916 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,916 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,916 INFO || Kafka startTimeMs: 1614133942916 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:32:22,921 INFO || [Producer clientId=producer-15] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,924 INFO || [Consumer clientId=consumer-plaintext123-3, groupId=plaintext123] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,934 INFO || [Consumer clientId=consumer-plaintext123-3, groupId=plaintext123] Subscribed to partition(s): connect-configs-0 [org.apache.kafka.clients.consumer.KafkaConsumer]
2021-02-24 10:32:22,934 INFO || [Consumer clientId=consumer-plaintext123-3, groupId=plaintext123] Seeking to EARLIEST offset of partition connect-configs-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,941 INFO || [Consumer clientId=consumer-plaintext123-3, groupId=plaintext123] Resetting offset for partition connect-configs-0 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:32:22,941 INFO || Finished reading KafkaBasedLog for topic connect-configs [org.apache.kafka.connect.util.KafkaBasedLog]
2021-02-24 10:32:22,941 INFO || Started KafkaBasedLog for topic connect-configs [org.apache.kafka.connect.util.KafkaBasedLog]
2021-02-24 10:32:22,941 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2021-02-24 10:32:22,941 INFO || [Worker clientId=connect-1, groupId=plaintext123] Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:32:22,953 INFO || [Worker clientId=connect-1, groupId=plaintext123] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:32:22,954 INFO || [Worker clientId=connect-1, groupId=plaintext123] Discovered group coordinator 192.168.90.221:9092 (id: 2147483647 rack: null) [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:32:22,957 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2021-02-24 10:32:22,957 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:32:22,966 INFO || [Worker clientId=connect-1, groupId=plaintext123] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:32:22,966 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:32:22,994 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 1 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:32:22,996 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=-1, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:32:22,996 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset -1 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:32:22,996 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:32:23,091 INFO || Started o.e.j.s.ServletContextHandler@14f8fdae{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler]
2021-02-24 10:32:23,091 INFO || REST resources initialized; server is started and ready to handle requests [org.apache.kafka.connect.runtime.rest.RestServer]
2021-02-24 10:32:23,091 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect]
2021-02-24 10:32:23,094 INFO || [Worker clientId=connect-1, groupId=plaintext123] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:29,989 INFO || JVM Runtime does not support Modules [org.eclipse.jetty.util.TypeUtil]
2021-02-24 10:33:43,758 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2021-02-24 10:33:43,766 INFO || [Worker clientId=connect-1, groupId=plaintext123] Connector dbz.logminer.source config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:44,269 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2021-02-24 10:33:44,269 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:33:44,277 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 2 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:33:44,277 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 2 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=2, connectorIds=[dbz.logminer.source], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:44,277 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 2 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:44,278 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connector dbz.logminer.source [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:44,282 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig]
2021-02-24 10:33:44,283 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] transforms.dropTombstone.add.fields = [op, source.ts_ms] transforms.dropTombstone.add.headers = [] transforms.dropTombstone.delete.handling.mode = rewrite transforms.dropTombstone.drop.tombstones = true transforms.dropTombstone.route.by.field = transforms.dropTombstone.type = class io.debezium.transforms.ExtractNewRecordState value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2021-02-24 10:33:44,283 INFO || Creating connector dbz.logminer.source of type io.debezium.connector.oracle.OracleConnector [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:44,287 INFO || Instantiated connector dbz.logminer.source with version 1.5.0-SNAPSHOT of type class io.debezium.connector.oracle.OracleConnector [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:44,295 INFO || Finished creating connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:44,297 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2021-02-24 10:33:44,297 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] transforms.dropTombstone.add.fields = [op, source.ts_ms] transforms.dropTombstone.add.headers = [] transforms.dropTombstone.delete.handling.mode = rewrite transforms.dropTombstone.drop.tombstones = true transforms.dropTombstone.route.by.field = transforms.dropTombstone.type = class io.debezium.transforms.ExtractNewRecordState value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2021-02-24 10:33:45,280 INFO || [Worker clientId=connect-1, groupId=plaintext123] Tasks [dbz.logminer.source-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,781 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,783 INFO || [Worker clientId=connect-1, groupId=plaintext123] Handling task config update by restarting tasks [] [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,783 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2021-02-24 10:33:45,783 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:33:45,789 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 3 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:33:45,789 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 3 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=4, connectorIds=[dbz.logminer.source], taskIds=[dbz.logminer.source-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,791 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 4 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,792 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting task dbz.logminer.source-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,792 INFO || Creating task dbz.logminer.source-0 [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:45,794 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig]
2021-02-24 10:33:45,795 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] transforms.dropTombstone.add.fields = [op, source.ts_ms] transforms.dropTombstone.add.headers = [] transforms.dropTombstone.delete.handling.mode = rewrite transforms.dropTombstone.drop.tombstones = true transforms.dropTombstone.route.by.field = transforms.dropTombstone.type = class io.debezium.transforms.ExtractNewRecordState value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2021-02-24 10:33:45,797 INFO || TaskConfig values: task.class = class io.debezium.connector.oracle.OracleConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2021-02-24 10:33:45,799 INFO || Instantiated task dbz.logminer.source-0 with version 1.5.0-SNAPSHOT of type io.debezium.connector.oracle.OracleConnectorTask [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:45,806 INFO || AvroConverterConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig]
2021-02-24 10:33:45,830 INFO || KafkaAvroSerializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig]
2021-02-24 10:33:45,833 INFO || KafkaAvroDeserializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL specific.avro.reader = false value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig]
2021-02-24 10:33:45,873 INFO || AvroDataConfig values: connect.meta.data = true enhanced.avro.schema.support = false schemas.cache.config = 1000 [io.confluent.connect.avro.AvroDataConfig]
2021-02-24 10:33:45,873 INFO || Set up the key converter class io.confluent.connect.avro.AvroConverter for task dbz.logminer.source-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:45,873 INFO || AvroConverterConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig]
2021-02-24 10:33:45,873 INFO || KafkaAvroSerializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig]
2021-02-24 10:33:45,873 INFO || KafkaAvroDeserializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL specific.avro.reader = false value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig]
2021-02-24 10:33:45,873 INFO || AvroDataConfig values: connect.meta.data = true enhanced.avro.schema.support = false schemas.cache.config = 1000 [io.confluent.connect.avro.AvroDataConfig]
2021-02-24 10:33:45,873 INFO || Set up the value converter class io.confluent.connect.avro.AvroConverter for task dbz.logminer.source-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:45,874 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task dbz.logminer.source-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:45,888 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.ExtractNewRecordState} [org.apache.kafka.connect.runtime.Worker]
2021-02-24 10:33:45,891 INFO || ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = connector-producer-dbz.logminer.source-0 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 2147483647 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:33:45,894 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:45,894 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:45,894 INFO || Kafka startTimeMs: 1614134025894 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:45,896 INFO || [Producer clientId=connector-producer-dbz.logminer.source-0] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:33:45,902 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:33:45,911 WARN || Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,911 WARN || Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,912 INFO || Starting OracleConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || connector.class = io.debezium.connector.oracle.OracleConnector [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || tasks.max = 1 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.history.kafka.topic = history.dbz.logminer [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || transforms = dropTombstone [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.tablename.case.insensitive = true [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || transforms.dropTombstone.type = io.debezium.transforms.ExtractNewRecordState [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || transforms.dropTombstone.add.fields = op,source.ts_ms [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || log.mining.strategy = online_catalog [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || include.schema.changes = false [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || transforms.dropTombstone.drop.tombstones = true [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.schema = wangbing [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.oracle.version = 11 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || transforms.dropTombstone.delete.handling.mode = rewrite [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.user = wangbing [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.dbname = XE [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.connection.adapter = logminer [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.history.kafka.bootstrap.servers = dn90221:9092 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.server.name = dbz.logminer [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.port = 1523 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || task.class = io.debezium.connector.oracle.OracleConnectorTask [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.hostname = 192.168.90.45 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || name = dbz.logminer.source [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || table.include.list = wangbing.test [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,914 INFO || snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:33:45,921 WARN || Using configuration property "database.whitelist" is deprecated and will be removed in future versions. Please use "database.include.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,921 WARN || Using configuration property "database.blacklist" is deprecated and will be removed in future versions. Please use "database.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,921 WARN || Using configuration property "schema.whitelist" is deprecated and will be removed in future versions. Please use "schema.include.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,921 WARN || Using configuration property "schema.blacklist" is deprecated and will be removed in future versions. Please use "schema.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,924 WARN || Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,925 WARN || Using configuration property "database.whitelist" is deprecated and will be removed in future versions. Please use "database.include.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,925 WARN || Using configuration property "database.blacklist" is deprecated and will be removed in future versions. Please use "database.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:45,925 WARN || Using configuration property "column.blacklist" is deprecated and will be removed in future versions. Please use "column.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:33:46,000 INFO || KafkaDatabaseHistory Consumer config: {enable.auto.commit=false, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, group.id=dbz.logminer-dbhistory, auto.offset.reset=earliest, session.timeout.ms=10000, bootstrap.servers=dn90221:9092, client.id=dbz.logminer-dbhistory, key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, fetch.min.bytes=1} [io.debezium.relational.history.KafkaDatabaseHistory]
2021-02-24 10:33:46,000 INFO || KafkaDatabaseHistory Producer config: {bootstrap.servers=dn90221:9092, value.serializer=org.apache.kafka.common.serialization.StringSerializer, buffer.memory=1048576, retries=1, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=dbz.logminer-dbhistory, linger.ms=0, batch.size=32768, max.block.ms=10000, acks=1} [io.debezium.relational.history.KafkaDatabaseHistory]
2021-02-24 10:33:46,002 INFO || Requested thread factory for connector OracleConnector, id = dbz.logminer named = db-history-config-check [io.debezium.util.Threads]
2021-02-24 10:33:46,005 INFO || ProducerConfig values: acks = 1 batch.size = 32768 bootstrap.servers = [dn90221:9092] buffer.memory = 1048576 client.dns.lookup = default client.id = dbz.logminer-dbhistory compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 10000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:33:46,008 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:46,008 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:46,008 INFO || Kafka startTimeMs: 1614134026008 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:46,010 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [dn90221:9092] check.crcs = true client.dns.lookup = default client.id = dbz.logminer-dbhistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbz.logminer-dbhistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null
sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2021-02-24 10:33:46,012 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:33:46,012 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:33:46,012 INFO || Kafka startTimeMs: 1614134026012 [org.apache.kafka.common.utils.AppInfoParser] 2021-02-24 10:33:46,013 INFO || [Producer clientId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:33:46,015 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata] 2021-02-24 10:33:46,019 INFO || AdminClientConfig values: bootstrap.servers = [dn90221:9092] client.dns.lookup = default client.id = dbz.logminer-dbhistory connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 
reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:33:46,020 WARN || The configuration 'value.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:33:46,020 WARN || The configuration 'batch.size' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:33:46,020 WARN || The configuration 'max.block.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:33:46,020 WARN || The configuration 'acks' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] 2021-02-24 10:33:46,020 WARN || The configuration 'buffer.memory' was supplied but isn't a known config. 
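The deprecation warnings logged at 10:33:45 (table.blacklist, database.whitelist, database.blacklist, column.blacklist) all follow one rename scheme: "whitelist" becomes "include.list" and "blacklist" becomes "exclude.list". A minimal sketch of a config rewrite that applies the renames named in those warnings; the helper function is illustrative, not part of Debezium:

```python
# Rename Debezium's deprecated filter properties to their replacements,
# following the WARN lines in the log ("table.blacklist" -> "table.exclude.list",
# "database.whitelist" -> "database.include.list", and so on).
DEPRECATED = {
    "table.whitelist": "table.include.list",
    "table.blacklist": "table.exclude.list",
    "database.whitelist": "database.include.list",
    "database.blacklist": "database.exclude.list",
    "column.whitelist": "column.include.list",
    "column.blacklist": "column.exclude.list",
}

def modernize(config: dict) -> dict:
    """Return a copy of the connector config with deprecated keys renamed;
    keys that are not deprecated pass through unchanged."""
    return {DEPRECATED.get(key, key): value for key, value in config.items()}
```

Renaming the properties in the connector JSON silences the warnings without changing filtering behavior, since the new properties accept the same values.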
[org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:33:46,020 WARN || The configuration 'key.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:33:46,020 WARN || The configuration 'linger.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:33:46,020 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:46,020 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:46,020 INFO || Kafka startTimeMs: 1614134026020 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:33:46,043 INFO || Database history topic '(name=history.dbz.logminer, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=9223372036854775807, retention.bytes=-1})' created [io.debezium.relational.history.KafkaDatabaseHistory]
2021-02-24 10:33:46,429 INFO || Requested thread factory for connector OracleConnector, id = dbz.logminer named = change-event-source-coordinator [io.debezium.util.Threads]
2021-02-24 10:33:46,431 INFO || Creating thread debezium-oracleconnector-dbz.logminer-change-event-source-coordinator [io.debezium.util.Threads]
2021-02-24 10:33:46,432 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:33:46,434 INFO || Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:33:46,435 INFO || Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:33:46,442 INFO || Snapshot step 1 - Preparing [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:46,649 INFO || Snapshot step 2 - Determining captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:46,909 TRACE || TableIds are: [XE.WANGBING.wbtest, XE.WANGBING.test, XE.WANGBING.LOG_MINING_FLUSH, XE.WANGBING.DBZORACLE] [io.debezium.connector.oracle.OracleConnection]
2021-02-24 10:33:46,912 INFO || Snapshot step 3 - Locking captured tables [XE.WANGBING.test] [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:46,915 DEBUG || Locking table XE.WANGBING.test [io.debezium.connector.oracle.OracleSnapshotChangeEventSource]
2021-02-24 10:33:46,919 INFO || Snapshot step 4 - Determining snapshot offset [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:46,924 INFO || No latest table SCN could be resolved, defaulting to current SCN [io.debezium.connector.oracle.OracleSnapshotChangeEventSource]
2021-02-24 10:33:46,938 INFO || Snapshot step 5 - Reading structure of captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:46,980 INFO || Snapshot step 6 - Persisting schema history [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:47,115 DEBUG || Applying schema change event SchemaChangeEvent [database=XE, schema=WANGBING, ddl= CREATE TABLE "WANGBING"."test" ( "ID" NUMBER(10,0), "NAME" VARCHAR2(255) ) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" , tables=[columns: { ID NUMBER(10, 0) DEFAULT VALUE NULL NAME VARCHAR2(255) DEFAULT VALUE NULL } primary key: [] default charset: null ], type=CREATE] [io.debezium.connector.oracle.OracleDatabaseSchema]
2021-02-24 10:33:47,121 DEBUG || Building schema for column ID of type 3 named NUMBER with constraints (10,Optional[0]) [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:33:47,123 INFO || JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'ID' [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:33:47,123 DEBUG || Building schema for column NAME of type 12 named VARCHAR2 with constraints (255,Optional.empty) [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:33:47,123 INFO || JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'NAME' [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:33:47,156 INFO || Snapshot step 7 - Skipping snapshotting of data [io.debezium.relational.RelationalSnapshotChangeEventSource]
2021-02-24 10:33:47,158 INFO || Snapshot - Final stage [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2021-02-24 10:33:47,158 INFO || Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=13951117]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:33:47,165 INFO || Connected metrics set to 'true' [io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics]
2021-02-24 10:33:47,166 INFO || Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:33:47,169 INFO || Requested thread factory for connector OracleConnector, id = dbz.logminer named = transactional-buffer [io.debezium.util.Threads]
2021-02-24 10:33:47,171 INFO || Logminer metrics initialized LogMinerMetrics{currentScn=-1, logMinerQueryCount=0, totalCapturedDmlCount=0, totalDurationOfFetchingQuery=PT0S, lastCapturedDmlCount=0, lastDurationOfFetchingQuery=PT0S, maxCapturedDmlCount=0, maxDurationOfFetchingQuery=PT0S, totalBatchProcessingDuration=PT0S, lastBatchProcessingDuration=PT0S, maxBatchProcessingDuration=PT0S, maxBatchProcessingThroughput=0, currentLogFileName=null, redoLogStatus=null, switchCounter=0, batchSize=20000, millisecondToSleepBetweenMiningQuery=1000, recordMiningHistory=false, hoursToKeepTransaction=4, networkConnectionProblemsCounter0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0,
sleepTimeMax=3000, sleepTimeIncrement=200} [io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:47,176 TRACE || Current time 1614134027175 ms, database difference -3 ms [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:47,180 TRACE || Getting first scn of all online logs [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,185 TRACE || First SCN in online logs is 375676 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,187 TRACE || Getting online redo logs for offset scn 13951117 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,196 TRACE || Online redo log /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log with SCN range 13914724 to 281474976710655 to be added. [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,196 TRACE || Online redo log /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_2_g6c5nj25_.log with SCN range 13905074 to 13914724 to be excluded. [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,206 TRACE || Adding log file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log to mining session [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,225 DEBUG || Last mined SCN: 13951117, Log file list to mine: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,229 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:47,243 DEBUG || Updating sleep time window. Sleep time 1200. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:47,244 TRACE || Updating LOG_MINING_FLUSH with SCN 13951124 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:48,474 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:48,484 TRACE || Starting log mining startScn=13951117, endScn=13951123, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:48,558 TRACE || scn=13951119, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:33:48,558 DEBUG || Transactional buffer empty, updating offset's SCN 13951123 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:48,559 DEBUG || Updating sleep time window. Sleep time 1400. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:48,560 TRACE || Updating LOG_MINING_FLUSH with SCN 13951129 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:49,999 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:50,011 TRACE || Starting log mining startScn=13951123, endScn=13951128, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:50,067 TRACE || scn=13951126, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:33:50,067 DEBUG || Transactional buffer empty, updating offset's SCN 13951128 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:50,068 DEBUG || Updating sleep time window. Sleep time 1600. Min sleep time 0. Max sleep time 3000. 
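The repeated "Updating sleep time window" DEBUG lines show the poll interval growing by 200 ms per idle pass (1200, 1400, 1600, ...) until it hits the configured maximum, and shrinking again once a pass captures DML. A small reconstruction of that back-off, with step and bounds taken from the LogMinerMetrics dump above (sleepTimeIncrement=200, sleepTimeMin=0, sleepTimeMax=3000); the function itself is illustrative, not connector code:

```python
def next_sleep(current_ms, found_dml, min_ms=0, max_ms=3000, step_ms=200):
    """Widen the sleep window by one step after an idle mining pass,
    narrow it by one step after a pass that captured DML, and clamp
    the result to [min_ms, max_ms]."""
    delta = -step_ms if found_dml else step_ms
    return max(min_ms, min(max_ms, current_ms + delta))
```

Starting from sleepTimeDefault=1000, three idle passes yield 1200, 1400, 1600, matching the logged sequence; after the later INSERT is mined, the window drops from 3000 back to 2800.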
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:50,069 TRACE || Updating LOG_MINING_FLUSH with SCN 13951134 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:51,698 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:51,707 TRACE || Starting log mining startScn=13951128, endScn=13951133, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:51,762 TRACE || scn=13951131, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:33:51,762 DEBUG || Transactional buffer empty, updating offset's SCN 13951133 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:51,763 DEBUG || Updating sleep time window. Sleep time 1800. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:51,764 TRACE || Updating LOG_MINING_FLUSH with SCN 13951139 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:53,590 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:53,603 TRACE || Starting log mining startScn=13951133, endScn=13951138, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:53,658 TRACE || scn=13951136, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:33:53,659 DEBUG || Transactional buffer empty, updating offset's SCN 13951138 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:53,660 DEBUG || Updating sleep time window. Sleep time 2000. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:53,661 TRACE || Updating LOG_MINING_FLUSH with SCN 13951144 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:55,677 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:55,687 TRACE || Starting log mining startScn=13951138, endScn=13951143, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:55,743 TRACE || scn=13951141, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:33:55,743 DEBUG || Transactional buffer empty, updating offset's SCN 13951143 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:55,744 DEBUG || Updating sleep time window. Sleep time 2200. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:55,744 TRACE || Updating LOG_MINING_FLUSH with SCN 13951149 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:55,901 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:33:55,902 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:33:57,980 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:57,991 TRACE || Starting log mining startScn=13951143, endScn=13951148, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:33:58,046 TRACE || scn=13951146, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:33:58,046 DEBUG || Transactional buffer empty, updating offset's SCN 13951148 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:33:58,047 DEBUG || Updating sleep time window. Sleep time 2400. Min sleep time 0. Max sleep time 3000. 
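Each "Starting log mining startScn=..., endScn=..." TRACE shows one bounded mining pass, and the next pass resumes from the previous endScn (13951117 to 13951123, then 13951123 to 13951128, and so on), so no SCN range is skipped or re-read. An illustrative sketch of that windowing, not the connector's actual implementation:

```python
def mining_windows(start_scn, current_scn_fn, polls):
    """Build the sequence of [startScn, endScn] windows that a polling
    loop would mine: each pass ends at the database's current SCN and
    the next pass begins exactly where the previous one ended."""
    windows = []
    start = start_scn
    for _ in range(polls):
        end = current_scn_fn()      # ask the database for its current SCN
        windows.append((start, end))
        start = end                 # checkpoint: resume from this endScn
    return windows
```

With the SCNs visible in the log, three polls starting at the snapshot offset 13951117 reproduce exactly the windows in the TRACE lines.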
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:33:58,048 TRACE || Updating LOG_MINING_FLUSH with SCN 13951154 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:00,479 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:00,490 TRACE || Starting log mining startScn=13951148, endScn=13951153, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:00,541 TRACE || scn=13951151, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:00,541 DEBUG || Transactional buffer empty, updating offset's SCN 13951153 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:00,542 DEBUG || Updating sleep time window. Sleep time 2600. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:00,543 TRACE || Updating LOG_MINING_FLUSH with SCN 13951159 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:03,171 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:03,183 TRACE || Starting log mining startScn=13951153, endScn=13951158, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:03,235 TRACE || scn=13951156, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:03,235 DEBUG || Transactional buffer empty, updating offset's SCN 13951158 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:03,236 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:03,237 TRACE || Updating LOG_MINING_FLUSH with SCN 13951164 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:05,903 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:05,903 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:06,048 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:06,056 TRACE || Starting log mining startScn=13951158, endScn=13951163, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:06,115 TRACE || scn=13951161, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:06,115 DEBUG || Transactional buffer empty, updating offset's SCN 13951163 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:06,116 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:06,116 TRACE || Updating LOG_MINING_FLUSH with SCN 13951169 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:09,146 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:09,157 TRACE || Starting log mining startScn=13951163, endScn=13951168, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:09,210 TRACE || scn=13951166, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:09,210 DEBUG || Transactional buffer empty, updating offset's SCN 13951168 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:09,211 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:09,212 TRACE || Updating LOG_MINING_FLUSH with SCN 13951174 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:12,032 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:12,042 TRACE || Starting log mining startScn=13951168, endScn=13951173, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:12,095 TRACE || scn=13951171, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:12,095 DEBUG || Transactional buffer empty, updating offset's SCN 13951173 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:12,096 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:12,097 TRACE || Updating LOG_MINING_FLUSH with SCN 13951179 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:15,111 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:15,123 TRACE || Starting log mining startScn=13951173, endScn=13951178, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:15,179 TRACE || scn=13951176, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:15,180 DEBUG || Transactional buffer empty, updating offset's SCN 13951178 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:15,181 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. 
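The recurring "Updating LOG_MINING_FLUSH with SCN ..." TRACE lines record a write of the current SCN into the connector's LOG_MINING_FLUSH helper table (it appears in the captured-table list during the snapshot), which forces buffered redo to be flushed before the next mining pass. A minimal simulation of that single-row update using sqlite3; the real statement runs against Oracle, and the table shape used here is an assumption:

```python
import sqlite3

# Simulate the single-row flush table the connector updates each cycle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LOG_MINING_FLUSH (LAST_SCN INTEGER)")
conn.execute("INSERT INTO LOG_MINING_FLUSH VALUES (0)")

def flush(scn: int) -> None:
    """Overwrite the flush table's single row with the latest SCN,
    mirroring the 'Updating LOG_MINING_FLUSH with SCN ...' traces."""
    conn.execute("UPDATE LOG_MINING_FLUSH SET LAST_SCN = ?", (scn,))
    conn.commit()

flush(13951179)  # SCN from the TRACE line above
```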
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:34:15,181 TRACE || Updating LOG_MINING_FLUSH with SCN 13951186 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:34:15,903 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:34:15,903 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:34:18,013 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:34:18,025 TRACE || Starting log mining startScn=13951178, endScn=13951185, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:34:18,082 TRACE || scn=13951181, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,082 TRACE || scn=13951182, operationCode=1, operation=INSERT, table=test, segOwner=WANGBING, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,083 TRACE || DML, transactionId=08001100EE240000, SCN=13951182, table_name=test, segOwner=WANGBING, operationCode=1, offsetSCN=13951178, commitOffsetSCN=null, sql insert into "WANGBING"."test"("ID","NAME") values ('1','bing'); [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,121 TRACE || Building schema for column ID of type 3 named NUMBER with constraints (10,Optional[0]) [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,122 TRACE || Value from data object: *** 1 *** [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,122 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$764/109238785@6d653552 [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,122 TRACE || Value from ResultReceiver: [received = true, object = 1] [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,122 TRACE || Building schema for column NAME of type 12 named VARCHAR2 with constraints (255,Optional.empty) [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,123 TRACE || Value from data object: *** bing *** [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,123 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$766/1197864560@3ee846b [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,123 TRACE || Value from ResultReceiver: [received = true, object = bing] [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:34:18,126 TRACE || scn=13951183, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,127 TRACE || COMMIT, transactionId=08001100EE240000, SCN=13951183, table_name=null, segOwner=null, operationCode=7, offsetSCN=13951178, commitOffsetSCN=null, smallest SCN: null, largest SCN 13951182 [io.debezium.connector.oracle.logminer.TransactionalBuffer]
2021-02-24 10:34:18,128 INFO || Creating thread debezium-oracleconnector-dbz.logminer-transactional-buffer-0 [io.debezium.util.Threads]
2021-02-24 10:34:18,128 TRACE || COMMIT, transactionId=08001100EE240000, SCN=13951183, table_name=null, segOwner=null, operationCode=7, offsetSCN=13951178, commitOffsetSCN=null [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,128 DEBUG || 1 DMLs, 1 Commits, 0 Rollbacks, 1 Inserts, 0 Updates, 0 Deletes. Processed in 46 millis. Lag:6128. Offset scn:13951182. Offset commit scn:13951183. Active transactions:1. Sleep time:2800 [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,128 TRACE || Processing DML event io.debezium.connector.oracle.logminer.valueholder.LogMinerDmlEntryImpl@7a3cf2ec scn 13951182 [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:18,130 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:34:18,131 TRACE || Updating LOG_MINING_FLUSH with SCN 13951191 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:34:18,134 TRACE || Value from data object: *** 1 *** [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:34:18,134 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$764/109238785@325d6354 [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:34:18,134 TRACE || Value from ResultReceiver: [received = true, object = 1] [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:34:18,134 TRACE || Value from data object: *** bing *** [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:34:18,134 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$766/1197864560@491493d [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:34:18,134 TRACE || Value from ResultReceiver: [received = true, object = bing] [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:34:18,424 INFO || 1 records sent during previous 00:00:32.623, last recorded offset: {commit_scn=13951183, transaction_id=null, scn=13951182} [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:34:18,894 WARN || [Producer clientId=connector-producer-dbz.logminer.source-0] Error while fetching metadata with correlation id 3 : {dbz.logminer.WANGBING.test=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient]
2021-02-24 10:34:21,167 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:34:21,177 TRACE || Starting log mining startScn=13951185, endScn=13951190, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:34:21,231 TRACE || scn=13951188, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:34:21,231 TRACE || Resetting largest SCN in transaction buffer to 13951190, nextStartScn=13951182, startScn=13951185 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:34:21,231 DEBUG || Transactional buffer empty, updating offset's SCN 13951190 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:34:21,232 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:21,233 TRACE || Updating LOG_MINING_FLUSH with SCN 13951196 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:24,055 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:24,065 TRACE || Starting log mining startScn=13951190, endScn=13951195, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:24,116 TRACE || scn=13951193, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:24,117 DEBUG || Transactional buffer empty, updating offset's SCN 13951195 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:24,117 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:24,118 TRACE || Updating LOG_MINING_FLUSH with SCN 13951201 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:25,904 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:25,904 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:25,916 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Finished commitOffsets successfully in 12 ms [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:27,147 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:27,157 TRACE || Starting log mining startScn=13951195, endScn=13951200, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:27,211 TRACE || scn=13951198, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:27,211 DEBUG || Transactional buffer empty, updating offset's SCN 13951200 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:27,212 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:27,213 TRACE || Updating LOG_MINING_FLUSH with SCN 13951206 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:30,048 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:30,059 TRACE || Starting log mining startScn=13951200, endScn=13951205, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:30,116 TRACE || scn=13951203, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:30,116 DEBUG || Transactional buffer empty, updating offset's SCN 13951205 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:30,117 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:30,118 TRACE || Updating LOG_MINING_FLUSH with SCN 13951211 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:33,141 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:33,150 TRACE || Starting log mining startScn=13951205, endScn=13951210, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:33,207 TRACE || scn=13951208, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:33,207 DEBUG || Transactional buffer empty, updating offset's SCN 13951210 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:33,208 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:33,208 TRACE || Updating LOG_MINING_FLUSH with SCN 13951216 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:35,916 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:35,916 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:36,032 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:36,044 TRACE || Starting log mining startScn=13951210, endScn=13951215, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:36,096 TRACE || scn=13951213, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:36,097 DEBUG || Transactional buffer empty, updating offset's SCN 13951215 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:36,098 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000. 
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:34:36,099 TRACE || Updating LOG_MINING_FLUSH with SCN 13951221 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:36,571 INFO || Successfully processed removal of connector 'dbz.logminer.source' [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2021-02-24 10:34:36,571 INFO || [Worker clientId=connect-1, groupId=plaintext123] Connector dbz.logminer.source config removed [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:37,072 INFO || [Worker clientId=connect-1, groupId=plaintext123] Handling connector-only config update by stopping connector dbz.logminer.source [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:37,072 INFO || Stopping connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:34:37,074 INFO || Stopped connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:34:37,074 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2021-02-24 10:34:37,074 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:34:37,079 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 4 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:34:37,081 INFO || Stopping connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:34:37,081 WARN || Ignoring stop request for unowned connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:34:37,081 INFO || Stopping task dbz.logminer.source-0 [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:34:37,081 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask] 2021-02-24 10:34:37,422 
INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:39,127 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:39,138 TRACE || Starting log mining startScn=13951215, endScn=13951220, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:34:39,195 TRACE || scn=13951218, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:34:39,195 DEBUG || Transactional buffer empty, updating offset's SCN 13951220 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:39,197 INFO || startScn=13951220, endScn=13951220, offsetContext.getScn()=13951220 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:39,197 INFO || Transactional buffer metrics dump: TransactionalBufferMetrics{oldestScn=13951182, committedScn=13951183, offsetScn=13951182, lagFromTheSource=PT6.128S, activeTransactions=0, rolledBackTransactions=0, committedTransactions=1, lastCommitDuration=13, maxCommitDuration=13, registeredDmlCounter=1, committedDmlCounter=1, maxLagFromTheSource=PT6.128S, minLagFromTheSource=PT0S, abandonedTransactionIds=[], errorCounter=0, warningCounter=0, scnFreezeCounter=0, commitQueueCapacity=8192} [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:39,198 INFO || Transactional buffer dump: [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:39,198 INFO || LogMiner metrics dump: LogMinerMetrics{currentScn=13951220, logMinerQueryCount=20, totalCapturedDmlCount=1, totalDurationOfFetchingQuery=PT1.031861309S, lastCapturedDmlCount=1, 
lastDurationOfFetchingQuery=PT0.052565145S, maxCapturedDmlCount=1, maxDurationOfFetchingQuery=PT0.061000523S, totalBatchProcessingDuration=PT0.046S, lastBatchProcessingDuration=PT0.046S, maxBatchProcessingDuration=PT0.046S, maxBatchProcessingThroughput=22, currentLogFileName=[Ljava.lang.String;@ed49aef, redoLogStatus=[Ljava.lang.String;@3f0b864f, switchCounter=0, batchSize=20000, millisecondToSleepBetweenMiningQuery=3000, recordMiningHistory=false, hoursToKeepTransaction=4, networkConnectionProblemsCounter0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200} [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:34:39,198 INFO || Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator] 2021-02-24 10:34:39,198 INFO || Connected metrics set to 'false' [io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics] 2021-02-24 10:34:39,201 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] 2021-02-24 10:34:39,202 INFO || [Producer clientId=dbz.logminer-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer] 2021-02-24 10:34:39,205 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:34:39,205 INFO || [Producer clientId=connector-producer-dbz.logminer.source-0] Closing the Kafka producer with timeoutMillis = 30000 ms. 
[org.apache.kafka.clients.producer.KafkaProducer] 2021-02-24 10:34:39,210 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished stopping tasks in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,215 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished flushing status backing store in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,216 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 4 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=6, connectorIds=[], taskIds=[], revokedConnectorIds=[dbz.logminer.source], revokedTaskIds=[dbz.logminer.source-0], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,216 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 6 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,216 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,216 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2021-02-24 10:34:39,216 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:34:39,220 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 5 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:34:39,220 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 5 with protocol version 
2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=6, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,220 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 6 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:34:39,220 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:04,542 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig] 2021-02-24 10:35:04,546 INFO || [Worker clientId=connect-1, groupId=plaintext123] Connector dbz.logminer.source config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:05,047 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2021-02-24 10:35:05,047 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:35:05,051 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 6 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:35:05,051 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 6 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=7, connectorIds=[dbz.logminer.source], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 
2021-02-24 10:35:05,051 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 7 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:05,051 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connector dbz.logminer.source [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:05,052 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig] 2021-02-24 10:35:05,053 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] transforms.dropTombstone.add.fields = [op, source.ts_ms] transforms.dropTombstone.add.headers = [] transforms.dropTombstone.delete.handling.mode = rewrite transforms.dropTombstone.drop.tombstones = true transforms.dropTombstone.route.by.field = transforms.dropTombstone.type = class io.debezium.transforms.ExtractNewRecordState value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2021-02-24 10:35:05,053 INFO || Creating connector dbz.logminer.source of type io.debezium.connector.oracle.OracleConnector [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:05,053 INFO || Instantiated connector dbz.logminer.source with version 1.5.0-SNAPSHOT of type class 
io.debezium.connector.oracle.OracleConnector [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:05,054 INFO || Finished creating connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:05,054 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2021-02-24 10:35:05,055 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] transforms.dropTombstone.add.fields = [op, source.ts_ms] transforms.dropTombstone.add.headers = [] transforms.dropTombstone.delete.handling.mode = rewrite transforms.dropTombstone.drop.tombstones = true transforms.dropTombstone.route.by.field = transforms.dropTombstone.type = class io.debezium.transforms.ExtractNewRecordState value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2021-02-24 10:35:06,062 INFO || [Worker clientId=connect-1, groupId=plaintext123] Tasks [dbz.logminer.source-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:06,564 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:06,565 INFO || [Worker clientId=connect-1, 
groupId=plaintext123] Handling task config update by restarting tasks [] [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:06,565 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2021-02-24 10:35:06,565 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:35:06,569 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 7 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:35:06,569 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 7 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=9, connectorIds=[dbz.logminer.source], taskIds=[dbz.logminer.source-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:06,570 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 9 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:06,570 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting task dbz.logminer.source-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:35:06,571 INFO || Creating task dbz.logminer.source-0 [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:06,571 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = 
dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig] 2021-02-24 10:35:06,571 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = dbz.logminer.source tasks.max = 1 transforms = [dropTombstone] transforms.dropTombstone.add.fields = [op, source.ts_ms] transforms.dropTombstone.add.headers = [] transforms.dropTombstone.delete.handling.mode = rewrite transforms.dropTombstone.drop.tombstones = true transforms.dropTombstone.route.by.field = transforms.dropTombstone.type = class io.debezium.transforms.ExtractNewRecordState value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2021-02-24 10:35:06,571 INFO || TaskConfig values: task.class = class io.debezium.connector.oracle.OracleConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2021-02-24 10:35:06,572 INFO || Instantiated task dbz.logminer.source-0 with version 1.5.0-SNAPSHOT of type io.debezium.connector.oracle.OracleConnectorTask [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:06,572 INFO || AvroConverterConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy 
[io.confluent.connect.avro.AvroConverterConfig] 2021-02-24 10:35:06,572 INFO || KafkaAvroSerializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig] 2021-02-24 10:35:06,573 INFO || KafkaAvroDeserializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL specific.avro.reader = false value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] 2021-02-24 10:35:06,573 INFO || AvroDataConfig values: connect.meta.data = true enhanced.avro.schema.support = false schemas.cache.config = 1000 [io.confluent.connect.avro.AvroDataConfig] 2021-02-24 10:35:06,573 INFO || Set up the key converter class io.confluent.connect.avro.AvroConverter for task dbz.logminer.source-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:06,573 INFO || AvroConverterConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false 
auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig] 2021-02-24 10:35:06,573 INFO || KafkaAvroSerializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig] 2021-02-24 10:35:06,573 INFO || KafkaAvroDeserializerConfig values: bearer.auth.token = [hidden] proxy.port = -1 schema.reflection = false auto.register.schemas = true max.schemas.per.subject = 1000 basic.auth.credentials.source = URL specific.avro.reader = false value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy schema.registry.url = [http://192.168.90.221:8081] basic.auth.user.info = [hidden] proxy.host = use.latest.version = false schema.registry.basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] 2021-02-24 10:35:06,574 INFO || 
AvroDataConfig values: connect.meta.data = true enhanced.avro.schema.support = false schemas.cache.config = 1000 [io.confluent.connect.avro.AvroDataConfig] 2021-02-24 10:35:06,574 INFO || Set up the value converter class io.confluent.connect.avro.AvroConverter for task dbz.logminer.source-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:06,574 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task dbz.logminer.source-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:06,575 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.ExtractNewRecordState} [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:35:06,575 INFO || ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [dn90221:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = connector-producer-dbz.logminer.source-0 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 2147483647 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:35:06,578 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,578 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,578 INFO || Kafka startTimeMs: 1614134106578 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,579 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2021-02-24 10:35:06,580 WARN || Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,581 WARN || Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead.
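The WARN lines above (and the similar ones further down) announce the renaming of Debezium's blacklist/whitelist filter properties. A small helper, ours rather than Debezium's, that applies the renames the log calls out:

```python
# Hypothetical helper that upgrades a Debezium connector config by renaming the
# deprecated filter properties flagged in the WARN messages to their announced
# replacements. The mapping below lists exactly the renames the log reports.

RENAMES = {
    "table.blacklist": "table.exclude.list",
    "database.whitelist": "database.include.list",
    "database.blacklist": "database.exclude.list",
    "schema.whitelist": "schema.include.list",
    "schema.blacklist": "schema.exclude.list",
    "column.blacklist": "column.exclude.list",
}

def upgrade_config(config: dict) -> dict:
    """Return a copy of the connector config with deprecated keys renamed."""
    return {RENAMES.get(key, key): value for key, value in config.items()}
```

Running a config through the helper removes the warnings at the source instead of relying on Debezium's backward compatibility.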
 [io.debezium.config.Configuration]
2021-02-24 10:35:06,581 INFO || Starting OracleConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    connector.class = io.debezium.connector.oracle.OracleConnector [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    tasks.max = 1 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.history.kafka.topic = history.dbz.logminer [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    transforms = dropTombstone [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.tablename.case.insensitive = true [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    transforms.dropTombstone.type = io.debezium.transforms.ExtractNewRecordState [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    transforms.dropTombstone.add.fields = op,source.ts_ms [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    log.mining.strategy = online_catalog [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    include.schema.changes = false [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    transforms.dropTombstone.drop.tombstones = true [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.schema = wangbing [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.oracle.version = 11 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    transforms.dropTombstone.delete.handling.mode = rewrite [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.user = wangbing [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.dbname = XE [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.connection.adapter = 
logminer [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.history.kafka.bootstrap.servers = dn90221:9092 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.server.name = dbz.logminer [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.port = 1523 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    task.class = io.debezium.connector.oracle.OracleConnectorTask [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.hostname = 192.168.90.45 [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    database.password = ******** [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    name = dbz.logminer.source [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    table.include.list = wangbing.test [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,581 INFO ||    snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:06,582 WARN || Using configuration property "database.whitelist" is deprecated and will be removed in future versions. Please use "database.include.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "database.blacklist" is deprecated and will be removed in future versions. Please use "database.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "schema.whitelist" is deprecated and will be removed in future versions. Please use "schema.include.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "schema.blacklist" is deprecated and will be removed in future versions. Please use "schema.exclude.list" instead.
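The task configuration above wires the `dropTombstone` SMT (`io.debezium.transforms.ExtractNewRecordState`) with `drop.tombstones = true`, `delete.handling.mode = rewrite`, and `add.fields = op,source.ts_ms`. In outline, that transform unwraps Debezium's change envelope roughly as follows; this is a simplified sketch over plain dicts, not Debezium's actual implementation, which operates on Connect records and schemas:

```python
# Rough sketch of ExtractNewRecordState with the options logged above.
# Simplified and hypothetical: field names follow the SMT's default "__" prefix.

def extract_new_record_state(envelope):
    if envelope is None:             # tombstone record
        return None                  # drop.tombstones = true -> dropped
    if envelope["op"] == "d":        # delete event
        flat = dict(envelope.get("before") or {})
        flat["__deleted"] = "true"   # delete.handling.mode = rewrite
    else:
        flat = dict(envelope.get("after") or {})
        flat["__deleted"] = "false"
    # add.fields = op,source.ts_ms -> copied into the flattened value
    flat["__op"] = envelope["op"]
    flat["__source_ts_ms"] = envelope["source"]["ts_ms"]
    return flat
```

The net effect, visible later in the topic, is a flat row image per change instead of the nested before/after envelope.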
 [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "database.whitelist" is deprecated and will be removed in future versions. Please use "database.include.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "database.blacklist" is deprecated and will be removed in future versions. Please use "database.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,582 WARN || Using configuration property "column.blacklist" is deprecated and will be removed in future versions. Please use "column.exclude.list" instead. [io.debezium.config.Configuration]
2021-02-24 10:35:06,584 INFO || [Producer clientId=connector-producer-dbz.logminer.source-0] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:35:06,585 INFO || KafkaDatabaseHistory Consumer config: {enable.auto.commit=false, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, group.id=dbz.logminer-dbhistory, auto.offset.reset=earliest, session.timeout.ms=10000, bootstrap.servers=dn90221:9092, client.id=dbz.logminer-dbhistory, key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, fetch.min.bytes=1} [io.debezium.relational.history.KafkaDatabaseHistory]
2021-02-24 10:35:06,585 INFO || KafkaDatabaseHistory Producer config: {bootstrap.servers=dn90221:9092, value.serializer=org.apache.kafka.common.serialization.StringSerializer, buffer.memory=1048576, retries=1, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=dbz.logminer-dbhistory, linger.ms=0, batch.size=32768, max.block.ms=10000, acks=1} [io.debezium.relational.history.KafkaDatabaseHistory]
2021-02-24 10:35:06,585 INFO || Requested
thread factory for connector OracleConnector, id = dbz.logminer named = db-history-config-check [io.debezium.util.Threads]
2021-02-24 10:35:06,586 INFO || ProducerConfig values:
    acks = 1
    batch.size = 32768
    bootstrap.servers = [dn90221:9092]
    buffer.memory = 1048576
    client.dns.lookup = default
    client.id = dbz.logminer-dbhistory
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 10000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 1
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = 
null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
 [org.apache.kafka.clients.producer.ProducerConfig]
2021-02-24 10:35:06,589 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,589 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,590 INFO || Kafka startTimeMs: 1614134106589 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,590 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [dn90221:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = dbz.logminer-dbhistory
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = dbz.logminer-dbhistory
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
 [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:35:06,592 INFO || [Producer clientId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:35:06,594 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,594 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,594 INFO || Kafka startTimeMs: 1614134106594 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:06,596 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:35:07,017 INFO || Found previous offset OracleOffsetContext [scn=13951182] [io.debezium.connector.common.BaseSourceTask]
2021-02-24 10:35:07,018 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [dn90221:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = dbz.logminer-dbhistory
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = dbz.logminer-dbhistory
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
 [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:35:07,023 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,023 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,023 INFO || Kafka startTimeMs: 1614134107023 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,026 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:35:07,030 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [dn90221:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = dbz.logminer-dbhistory
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = dbz.logminer-dbhistory
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
 [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:35:07,032 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,032 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,032 INFO || Kafka startTimeMs: 1614134107032 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,032 INFO || Creating thread debezium-oracleconnector-dbz.logminer-db-history-config-check [io.debezium.util.Threads]
2021-02-24 10:35:07,034 INFO || AdminClientConfig values:
    bootstrap.servers = [dn90221:9092]
    client.dns.lookup = default
    client.id = dbz.logminer-dbhistory-topic-check
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = 
[]
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 1
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
 [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'value.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'batch.size' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'max.block.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'acks' was supplied but isn't a known config.
 [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'buffer.memory' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'key.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 WARN || The configuration 'linger.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
2021-02-24 10:35:07,036 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,036 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,036 INFO || Kafka startTimeMs: 1614134107036 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,036 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:35:07,090 INFO || Database history topic 'history.dbz.logminer' has correct settings [io.debezium.relational.history.KafkaDatabaseHistory]
2021-02-24 10:35:07,092 INFO || Started database history recovery [io.debezium.relational.history.DatabaseHistoryMetrics]
2021-02-24 10:35:07,093 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [dn90221:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = dbz.logminer-dbhistory
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = dbz.logminer-dbhistory
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
 [org.apache.kafka.clients.consumer.ConsumerConfig]
2021-02-24 10:35:07,095 INFO || Kafka version: 5.5.1-ccs [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,095 INFO || Kafka commitId: 5b2445123128cfaf [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,095 INFO || Kafka 
startTimeMs: 1614134107095 [org.apache.kafka.common.utils.AppInfoParser]
2021-02-24 10:35:07,096 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Subscribed to topic(s): history.dbz.logminer [org.apache.kafka.clients.consumer.KafkaConsumer]
2021-02-24 10:35:07,098 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Cluster ID: 6gcmr5X3Qm-0UWEKTUtJrg [org.apache.kafka.clients.Metadata]
2021-02-24 10:35:07,102 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Discovered group coordinator 192.168.90.221:9092 (id: 2147483647 rack: null) [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:35:07,102 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:35:07,106 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:35:07,106 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:35:07,110 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Finished assignment for group at generation 1: {dbz.logminer-dbhistory-e564b018-ccf5-4e61-b23e-837f770c8b54=Assignment(partitions=[history.dbz.logminer-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2021-02-24 10:35:07,112 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Successfully joined group with generation 1 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 
10:35:07,114 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Adding newly assigned partitions: history.dbz.logminer-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2021-02-24 10:35:07,117 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Found no committed offset for partition history.dbz.logminer-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2021-02-24 10:35:07,119 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Resetting offset for partition history.dbz.logminer-0 to offset 0. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2021-02-24 10:35:07,128 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Revoke previously assigned partitions history.dbz.logminer-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2021-02-24 10:35:07,128 INFO || [Consumer clientId=dbz.logminer-dbhistory, groupId=dbz.logminer-dbhistory] Member dbz.logminer-dbhistory-e564b018-ccf5-4e61-b23e-837f770c8b54 sending LeaveGroup request to coordinator 192.168.90.221:9092 (id: 2147483647 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
2021-02-24 10:35:07,131 INFO || Finished database history recovery of 1 change(s) in 39 ms [io.debezium.relational.history.DatabaseHistoryMetrics]
2021-02-24 10:35:07,132 DEBUG || Building schema for column ID of type 3 named NUMBER with constraints (10,Optional[0]) [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:07,132 INFO || JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'ID' [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:07,132 DEBUG || Building schema for column NAME of type 12 named VARCHAR2 with constraints (255,Optional.empty) [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:07,132 INFO 
|| JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'NAME' [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:07,132 INFO || Requested thread factory for connector OracleConnector, id = dbz.logminer named = change-event-source-coordinator [io.debezium.util.Threads]
2021-02-24 10:35:07,132 INFO || Creating thread debezium-oracleconnector-dbz.logminer-change-event-source-coordinator [io.debezium.util.Threads]
2021-02-24 10:35:07,132 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:07,132 INFO || Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:35:07,133 INFO || Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:35:07,133 INFO || Snapshot ended with SnapshotResult [status=SKIPPED, offset=OracleOffsetContext [scn=13951182]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:35:07,133 INFO || Connected metrics set to 'true' [io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics]
2021-02-24 10:35:07,133 INFO || Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2021-02-24 10:35:07,133 INFO || Requested thread factory for connector OracleConnector, id = dbz.logminer named = transactional-buffer [io.debezium.util.Threads]
2021-02-24 10:35:07,133 INFO || Logminer metrics initialized LogMinerMetrics{currentScn=-1, logMinerQueryCount=0, totalCapturedDmlCount=0, totalDurationOfFetchingQuery=PT0S, lastCapturedDmlCount=0, lastDurationOfFetchingQuery=PT0S, maxCapturedDmlCount=0, maxDurationOfFetchingQuery=PT0S, totalBatchProcessingDuration=PT0S, lastBatchProcessingDuration=PT0S, maxBatchProcessingDuration=PT0S, maxBatchProcessingThroughput=0, currentLogFileName=null, redoLogStatus=null, switchCounter=0, batchSize=20000, millisecondToSleepBetweenMiningQuery=1000, recordMiningHistory=false, 
hoursToKeepTransaction=4, networkConnectionProblemsCounter=0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200} [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:07,167 TRACE || Current time 1614134107167 ms, database difference -5 ms [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:07,171 TRACE || Getting first scn of all online logs [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,176 TRACE || First SCN in online logs is 375676 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,178 TRACE || Getting online redo logs for offset scn 13951182 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,179 TRACE || Online redo log /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log with SCN range 13914724 to 281474976710655 to be added. [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,179 TRACE || Online redo log /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_2_g6c5nj25_.log with SCN range 13905074 to 13914724 to be excluded. [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,186 TRACE || Adding log file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log to mining session [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,187 DEBUG || Last mined SCN: 13951182, Log file list to mine: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,188 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:07,196 DEBUG || Updating sleep time window. Sleep time 1200. Min sleep time 0. Max sleep time 3000. 
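The TRACE lines above show how LogMinerHelper picks the online redo logs for the mining session: given offset SCN 13951182, the log spanning 13914724 to 281474976710655 is added (its range can still contain changes at or past the offset), while the log ending at 13914724 is excluded. A sketch of that selection rule, inferred from the logged decisions rather than taken from Debezium's code:

```python
# Sketch of the online-redo-log selection visible in the TRACE lines above:
# a log is mined only if its SCN range may contain changes past the offset SCN.

def logs_to_mine(online_logs, offset_scn):
    """online_logs: iterable of (file_name, first_scn, next_scn) tuples."""
    return [name for name, first_scn, next_scn in online_logs
            if next_scn > offset_scn]   # log may still cover offset_scn onward

logs = [
    # current log: NEXT_CHANGE# is the "infinite" sentinel 281474976710655
    ("o1_mf_1_g6c5nhsl_.log", 13914724, 281474976710655),
    # archived-out log ending before the offset SCN
    ("o1_mf_2_g6c5nj25_.log", 13905074, 13914724),
]
print(logs_to_mine(logs, 13951182))  # ['o1_mf_1_g6c5nhsl_.log']
```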
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:07,197 TRACE || Updating LOG_MINING_FLUSH with SCN 13951247 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:08,414 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:08,424 TRACE || Starting log mining startScn=13951182, endScn=13951246, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:08,490 TRACE || scn=13951182, operationCode=1, operation=INSERT, table=test, segOwner=WANGBING, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,490 TRACE || DML, transactionId=08001100EE240000, SCN=13951182, table_name=test, segOwner=WANGBING, operationCode=1, offsetSCN=13951182, commitOffsetSCN=13951183, sql insert into "WANGBING"."test"("ID","NAME") values ('1','bing'); [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,491 TRACE || Building schema for column ID of type 3 named NUMBER with constraints (10,Optional[0]) [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Value from data object: *** 1 *** [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$764/109238785@883e946 [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Value from ResultReceiver: [received = true, object = 1] [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Building schema for column NAME of type 12 named VARCHAR2 with constraints (255,Optional.empty) [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Value from data object: *** bing *** [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$766/1197864560@62af0075 [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || Value from ResultReceiver: [received = true, object = bing] [io.debezium.connector.oracle.logminer.OracleChangeRecordValueConverter]
2021-02-24 10:35:08,491 TRACE || scn=13951183, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,491 TRACE || COMMIT, transactionId=08001100EE240000, SCN=13951183, table_name=null, segOwner=null, operationCode=7, offsetSCN=13951182, commitOffsetSCN=13951183, smallest SCN: null, largest SCN 13951182 [io.debezium.connector.oracle.logminer.TransactionalBuffer]
2021-02-24 10:35:08,491 INFO || Creating thread debezium-oracleconnector-dbz.logminer-transactional-buffer-0 [io.debezium.util.Threads]
2021-02-24 10:35:08,492 TRACE || COMMIT, transactionId=08001100EE240000, SCN=13951183, table_name=null, segOwner=null, operationCode=7, offsetSCN=13951182, commitOffsetSCN=13951183 [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,492 TRACE || scn=13951188, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,492 TRACE || Processing DML event io.debezium.connector.oracle.logminer.valueholder.LogMinerDmlEntryImpl@7a3cf2ec scn 13951182 [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,492 TRACE || scn=13951193, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,493 TRACE || Value from data object: *** 1 *** [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:08,493 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$764/109238785@14634751 [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:08,493 TRACE || Value from ResultReceiver: [received = true, object = 1] [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:08,493 TRACE || scn=13951198, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,493 TRACE || Value from data object: *** bing *** [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:08,493 TRACE || Callback is: io.debezium.jdbc.JdbcValueConverters$$Lambda$766/1197864560@1aa4bbad [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:08,493 TRACE || Value from ResultReceiver: [received = true, object = bing] [io.debezium.connector.oracle.OracleValueConverters]
2021-02-24 10:35:08,493 TRACE || scn=13951203, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,493 TRACE || scn=13951208, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,494 TRACE || scn=13951213, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,494 TRACE || scn=13951218, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,494 TRACE || scn=13951223, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,494 TRACE || scn=13951236, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=SYS [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,495 TRACE || scn=13951238, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=SYS [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,495 TRACE || scn=13951239, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=SYS [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,495 DEBUG || 1 DMLs, 1 Commits, 0 Rollbacks, 1 Inserts, 0 Updates, 0 Deletes. Processed in 5 millis. Lag:56496. Offset scn:13951182. Offset commit scn:13951183. Active transactions:0. Sleep time:1200 [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:08,495 TRACE || Resetting largest SCN in transaction buffer to 13951246, nextStartScn=13951182, startScn=13951182 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:08,495 DEBUG || Transactional buffer empty, updating offset's SCN 13951246 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:08,496 DEBUG || Updating sleep time window. Sleep time 1400. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:08,497 TRACE || Updating LOG_MINING_FLUSH with SCN 13951252 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:09,928 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:09,943 TRACE || Starting log mining startScn=13951246, endScn=13951251, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:09,995 TRACE || scn=13951249, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:09,996 DEBUG || Transactional buffer empty, updating offset's SCN 13951251 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:09,997 DEBUG || Updating sleep time window. Sleep time 1600. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:09,997 TRACE || Updating LOG_MINING_FLUSH with SCN 13951257 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:11,623 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:11,633 TRACE || Starting log mining startScn=13951251, endScn=13951256, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:11,687 TRACE || scn=13951254, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:11,687 DEBUG || Transactional buffer empty, updating offset's SCN 13951256 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:11,688 DEBUG || Updating sleep time window. Sleep time 1800. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:11,689 TRACE || Updating LOG_MINING_FLUSH with SCN 13951262 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:13,513 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:13,524 TRACE || Starting log mining startScn=13951256, endScn=13951261, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:13,581 TRACE || scn=13951259, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:13,581 DEBUG || Transactional buffer empty, updating offset's SCN 13951261 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:13,582 DEBUG || Updating sleep time window. Sleep time 2000. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:13,583 TRACE || Updating LOG_MINING_FLUSH with SCN 13951267 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:15,616 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:15,627 TRACE || Starting log mining startScn=13951261, endScn=13951266, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:15,679 TRACE || scn=13951264, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:15,679 DEBUG || Transactional buffer empty, updating offset's SCN 13951266 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:15,680 DEBUG || Updating sleep time window. Sleep time 2200. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:15,681 TRACE || Updating LOG_MINING_FLUSH with SCN 13951272 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:16,579 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:16,580 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:16,582 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Finished commitOffsets successfully in 3 ms [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:17,908 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:17,920 TRACE || Starting log mining startScn=13951266, endScn=13951271, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:17,979 TRACE || scn=13951269, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:17,979 DEBUG || Transactional buffer empty, updating offset's SCN 13951271 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:17,980 DEBUG || Updating sleep time window. Sleep time 2400. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:17,981 TRACE || Updating LOG_MINING_FLUSH with SCN 13951277 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:20,408 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:20,420 TRACE || Starting log mining startScn=13951271, endScn=13951276, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:20,474 TRACE || scn=13951274, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:20,474 DEBUG || Transactional buffer empty, updating offset's SCN 13951276 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:20,475 DEBUG || Updating sleep time window. Sleep time 2600. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:20,476 TRACE || Updating LOG_MINING_FLUSH with SCN 13951282 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:23,101 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:23,111 TRACE || Starting log mining startScn=13951276, endScn=13951281, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:23,163 TRACE || scn=13951279, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:23,164 DEBUG || Transactional buffer empty, updating offset's SCN 13951281 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:23,165 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:23,165 TRACE || Updating LOG_MINING_FLUSH with SCN 13951287 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:25,992 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:26,003 TRACE || Starting log mining startScn=13951281, endScn=13951286, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:26,055 TRACE || scn=13951284, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:26,055 DEBUG || Transactional buffer empty, updating offset's SCN 13951286 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:26,056 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:26,057 TRACE || Updating LOG_MINING_FLUSH with SCN 13951292 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:26,583 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:26,583 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:29,085 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:29,096 TRACE || Starting log mining startScn=13951286, endScn=13951291, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:29,148 TRACE || scn=13951289, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:29,148 DEBUG || Transactional buffer empty, updating offset's SCN 13951291 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:29,149 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:29,150 TRACE || Updating LOG_MINING_FLUSH with SCN 13951297 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:31,992 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:32,003 TRACE || Starting log mining startScn=13951291, endScn=13951296, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:32,056 TRACE || scn=13951294, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:32,056 DEBUG || Transactional buffer empty, updating offset's SCN 13951296 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:32,057 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:32,058 TRACE || Updating LOG_MINING_FLUSH with SCN 13951302 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:35,082 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:35,094 TRACE || Starting log mining startScn=13951296, endScn=13951301, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:35,152 TRACE || scn=13951299, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:35,152 DEBUG || Transactional buffer empty, updating offset's SCN 13951301 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:35,153 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:35,153 TRACE || Updating LOG_MINING_FLUSH with SCN 13951307 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:36,583 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:36,583 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:37,977 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:37,986 TRACE || Starting log mining startScn=13951301, endScn=13951306, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:38,038 TRACE || scn=13951304, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:38,038 DEBUG || Transactional buffer empty, updating offset's SCN 13951306 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:38,039 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:38,040 TRACE || Updating LOG_MINING_FLUSH with SCN 13951312 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:41,077 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:41,086 TRACE || Starting log mining startScn=13951306, endScn=13951311, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:41,139 TRACE || scn=13951309, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:41,139 DEBUG || Transactional buffer empty, updating offset's SCN 13951311 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:41,140 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:41,141 TRACE || Updating LOG_MINING_FLUSH with SCN 13951317 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:43,969 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:43,981 TRACE || Starting log mining startScn=13951311, endScn=13951316, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:44,036 TRACE || scn=13951314, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:44,036 DEBUG || Transactional buffer empty, updating offset's SCN 13951316 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:44,037 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:44,038 TRACE || Updating LOG_MINING_FLUSH with SCN 13951322 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:46,584 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:46,584 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:47,063 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:47,071 TRACE || Starting log mining startScn=13951316, endScn=13951321, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:47,128 TRACE || scn=13951319, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:47,129 DEBUG || Transactional buffer empty, updating offset's SCN 13951321 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:47,130 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:47,130 TRACE || Updating LOG_MINING_FLUSH with SCN 13951327 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:49,947 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:49,960 TRACE || Starting log mining startScn=13951321, endScn=13951326, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:50,017 TRACE || scn=13951324, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:50,018 DEBUG || Transactional buffer empty, updating offset's SCN 13951326 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:50,019 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:50,019 TRACE || Updating LOG_MINING_FLUSH with SCN 13951332 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:53,053 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:53,064 TRACE || Starting log mining startScn=13951326, endScn=13951331, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:53,118 TRACE || scn=13951329, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:53,118 DEBUG || Transactional buffer empty, updating offset's SCN 13951331 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:53,119 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:53,120 TRACE || Updating LOG_MINING_FLUSH with SCN 13951337 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:55,951 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:55,963 TRACE || Starting log mining startScn=13951331, endScn=13951336, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:56,019 TRACE || scn=13951334, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:56,019 DEBUG || Transactional buffer empty, updating offset's SCN 13951336 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:56,020 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:56,021 TRACE || Updating LOG_MINING_FLUSH with SCN 13951342 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:56,584 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:56,584 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:35:59,044 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:59,056 TRACE || Starting log mining startScn=13951336, endScn=13951341, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:35:59,109 TRACE || scn=13951339, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:35:59,109 DEBUG || Transactional buffer empty, updating offset's SCN 13951341 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:35:59,110 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:35:59,110 TRACE || Updating LOG_MINING_FLUSH with SCN 13951347 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:01,936 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:01,946 TRACE || Starting log mining startScn=13951341, endScn=13951346, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:02,000 TRACE || scn=13951344, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:36:02,000 DEBUG || Transactional buffer empty, updating offset's SCN 13951346 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:36:02,001 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:36:02,001 TRACE || Updating LOG_MINING_FLUSH with SCN 13951352 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:05,021 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:05,032 TRACE || Starting log mining startScn=13951346, endScn=13951351, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:05,084 TRACE || scn=13951349, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:36:05,084 DEBUG || Transactional buffer empty, updating offset's SCN 13951351 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:36:05,085 DEBUG || Updating sleep time window. Sleep time 2800. Min sleep time 0. Max sleep time 3000. [io.debezium.connector.oracle.logminer.LogMinerMetrics]
2021-02-24 10:36:05,086 TRACE || Updating LOG_MINING_FLUSH with SCN 13951362 [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:06,584 INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:36:06,585 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-02-24 10:36:07,918 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:07,931 TRACE || Starting log mining startScn=13951351, endScn=13951361, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper]
2021-02-24 10:36:07,987 TRACE || scn=13951354, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:36:07,987 TRACE || scn=13951356, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=UNKNOWN [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:36:07,987 TRACE || scn=13951359, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=UNKNOWN [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor]
2021-02-24 10:36:07,987 DEBUG || Transactional buffer empty, updating offset's SCN 13951361 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2021-02-24 10:36:07,988 DEBUG || Updating sleep time window. Sleep time 3000. Min sleep time 0. Max sleep time 3000.
[io.debezium.connector.oracle.logminer.LogMinerMetrics] 2021-02-24 10:36:07,989 TRACE || Updating LOG_MINING_FLUSH with SCN 13951367 [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:36:08,450 INFO || Successfully processed removal of connector 'dbz.logminer.source' [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2021-02-24 10:36:08,451 INFO || [Worker clientId=connect-1, groupId=plaintext123] Connector dbz.logminer.source config removed [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:08,953 INFO || [Worker clientId=connect-1, groupId=plaintext123] Handling connector-only config update by stopping connector dbz.logminer.source [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:08,953 INFO || Stopping connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:36:08,954 INFO || Stopped connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:36:08,954 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2021-02-24 10:36:08,954 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:36:08,959 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 8 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:36:08,959 INFO || Stopping connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:36:08,959 WARN || Ignoring stop request for unowned connector dbz.logminer.source [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:36:08,959 INFO || Stopping task dbz.logminer.source-0 [org.apache.kafka.connect.runtime.Worker] 2021-02-24 10:36:08,960 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask] 2021-02-24 10:36:09,632 
INFO || WorkerSourceTask{id=dbz.logminer.source-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:36:11,024 TRACE || Current Redo log fileNames: [/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log] [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:36:11,035 TRACE || Starting log mining startScn=13951361, endScn=13951366, strategy=ONLINE_CATALOG, continuous=false [io.debezium.connector.oracle.logminer.LogMinerHelper] 2021-02-24 10:36:11,092 TRACE || scn=13951364, operationCode=7, operation=COMMIT, table=null, segOwner=null, userName=WANGBING [io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor] 2021-02-24 10:36:11,092 DEBUG || Transactional buffer empty, updating offset's SCN 13951366 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:36:11,093 INFO || startScn=13951366, endScn=13951366, offsetContext.getScn()=13951366 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:36:11,093 INFO || Transactional buffer metrics dump: TransactionalBufferMetrics{oldestScn=13951182, committedScn=13951183, offsetScn=13951182, lagFromTheSource=PT56.496S, activeTransactions=0, rolledBackTransactions=0, committedTransactions=1, lastCommitDuration=3, maxCommitDuration=3, registeredDmlCounter=1, committedDmlCounter=1, maxLagFromTheSource=PT56.496S, minLagFromTheSource=PT0S, abandonedTransactionIds=[], errorCounter=0, warningCounter=0, scnFreezeCounter=0, commitQueueCapacity=8192} [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:36:11,093 INFO || Transactional buffer dump: [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:36:11,093 INFO || LogMiner metrics dump: LogMinerMetrics{currentScn=13951366, logMinerQueryCount=24, totalCapturedDmlCount=1, totalDurationOfFetchingQuery=PT1.227275863S, lastCapturedDmlCount=1, 
lastDurationOfFetchingQuery=PT0.052852795S, maxCapturedDmlCount=1, maxDurationOfFetchingQuery=PT0.060873287S, totalBatchProcessingDuration=PT0.005S, lastBatchProcessingDuration=PT0.005S, maxBatchProcessingDuration=PT0.005S, maxBatchProcessingThroughput=200, currentLogFileName=[Ljava.lang.String;@74c37a9a, redoLogStatus=[Ljava.lang.String;@631d649d, switchCounter=0, batchSize=20000, millisecondToSleepBetweenMiningQuery=3000, recordMiningHistory=false, hoursToKeepTransaction=4, networkConnectionProblemsCounter0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200} [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource] 2021-02-24 10:36:11,094 INFO || Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator] 2021-02-24 10:36:11,094 INFO || Connected metrics set to 'false' [io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics] 2021-02-24 10:36:11,095 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] 2021-02-24 10:36:11,095 INFO || [Producer clientId=dbz.logminer-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer] 2021-02-24 10:36:11,097 INFO || WorkerSourceTask{id=dbz.logminer.source-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] 2021-02-24 10:36:11,098 INFO || [Producer clientId=connector-producer-dbz.logminer.source-0] Closing the Kafka producer with timeoutMillis = 30000 ms. 
[org.apache.kafka.clients.producer.KafkaProducer] 2021-02-24 10:36:11,100 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished stopping tasks in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,105 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished flushing status backing store in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,105 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 8 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=11, connectorIds=[], taskIds=[], revokedConnectorIds=[dbz.logminer.source], revokedTaskIds=[dbz.logminer.source-0], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,105 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 11 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,105 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,105 INFO || [Worker clientId=connect-1, groupId=plaintext123] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2021-02-24 10:36:11,105 INFO || [Worker clientId=connect-1, groupId=plaintext123] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:36:11,109 INFO || [Worker clientId=connect-1, groupId=plaintext123] Successfully joined group with generation 9 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] 2021-02-24 10:36:11,109 INFO || [Worker clientId=connect-1, groupId=plaintext123] Joined group at generation 9 with protocol 
version 2 and got assignment: Assignment{error=0, leader='connect-1-0e6e4d29-5e91-4fc6-b8fa-0e06d1a2ae15', leaderUrl='http://192.168.90.221:8083/', offset=11, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,109 INFO || [Worker clientId=connect-1, groupId=plaintext123] Starting connectors and tasks using config offset 11 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2021-02-24 10:36:11,109 INFO || [Worker clientId=connect-1, groupId=plaintext123] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]