Preparing certificates for internal communication
Adding /etc/tls-sidecar/cluster-ca-certs/ca.crt to truststore /tmp/topic-operator/replication.truststore.p12 with alias ca
Certificate was added to keystore
Preparing certificates for internal communication is complete
+ shift
+ export MALLOC_ARENA_MAX=2
+ MALLOC_ARENA_MAX=2
+ JAVA_OPTS=' -Dlog4j2.configurationFile=file:/opt/topic-operator/custom-config/log4j2.properties -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom'
++ get_gc_opts
++ '[' false == true ']'
++ echo ''
+ JAVA_OPTS=' -Dlog4j2.configurationFile=file:/opt/topic-operator/custom-config/log4j2.properties -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom '
+ exec /usr/bin/tini -w -e 143 -- java -Dlog4j2.configurationFile=file:/opt/topic-operator/custom-config/log4j2.properties -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -classpath lib/io.strimzi.topic-operator-0.18.0-SNAPSHOT.jar:lib/io.prometheus.simpleclient_common-0.7.0.jar:lib/com.github.luben.zstd-jni-1.4.3-1.jar:lib/io.netty.netty-handler-4.1.45.Final.jar:lib/com.101tec.zkclient-0.11.jar:lib/io.netty.netty-codec-http-4.1.45.Final.jar:lib/io.strimzi.operator-common-0.18.0-SNAPSHOT.jar:lib/com.squareup.okio.okio-1.15.0.jar:lib/io.netty.netty-buffer-4.1.45.Final.jar:lib/org.yaml.snakeyaml-1.24.jar:lib/io.fabric8.openshift-client-4.6.4.jar:lib/io.strimzi.api-0.18.0-SNAPSHOT.jar:lib/io.netty.netty-common-4.1.45.Final.jar:lib/org.apache.logging.log4j.log4j-api-2.13.0.jar:lib/org.xerial.snappy.snappy-java-1.1.7.3.jar:lib/org.hdrhistogram.HdrHistogram-2.1.11.jar:lib/io.prometheus.simpleclient-0.7.0.jar:lib/org.apache.yetus.audience-annotations-0.5.0.jar:lib/com.fasterxml.jackson.dataformat.jackson-dataformat-yaml-2.10.2.jar:lib/io.strimzi.certificate-manager-0.18.0-SNAPSHOT.jar:lib/io.netty.netty-codec-4.1.45.Final.jar:lib/io.micrometer.micrometer-core-1.3.1.jar:lib/jakarta.activation.jakarta.activation-api-1.2.1.jar:lib/io.vertx.vertx-core-3.8.5.jar:lib/io.netty.netty-codec-dns-4.1.45.Final.jar:lib/io.fabric8.kubernetes-model-4.6.4.jar:lib/io.netty.netty-codec-socks-4.1.45.Final.jar:lib/com.github.mifmif.generex-1.0.2.jar:lib/org.apache.zookeeper.zookeeper-jute-3.5.6.jar:lib/io.netty.netty-resolver-4.1.45.Final.jar:lib/io.netty.netty-handler-proxy-4.1.45.Final.jar:lib/com.squareup.okhttp3.logging-interceptor-3.12.6.jar:lib/io.netty.netty-transport-native-unix-common-4.1.45.Final.jar:lib/org.apache.zookeeper.zookeeper-3.5.6.jar:lib/dk.brics.automaton.automaton-1.11-8.jar:lib/io.vertx.vertx-micrometer-metrics-3.8.5.jar:lib/com.fasterxml.jackson.core.jackson-core-2.10.2.jar:lib/io.netty.netty-transport-4.1.45.Final.jar:lib/io.netty.netty-transport-native-epoll-4.1.45.Final.jar:lib/jakarta.xml.bind.jakarta.xml.bind-api-2.3.2.jar:lib/org.apache.logging.log4j.log4j-slf4j-impl-2.13.0.jar:lib/com.fasterxml.jackson.core.jackson-annotations-2.10.2.jar:lib/io.fabric8.zjsonpatch-0.3.0.jar:lib/org.lz4.lz4-java-1.6.0.jar:lib/io.fabric8.kubernetes-client-4.6.4.jar:lib/com.fasterxml.jackson.module.jackson-module-jaxb-annotations-2.10.2.jar:lib/io.strimzi.crd-annotations-0.18.0-SNAPSHOT.jar:lib/com.squareup.okhttp3.okhttp-3.12.6.jar:lib/io.netty.netty-codec-http2-4.1.45.Final.jar:lib/org.apache.logging.log4j.log4j-core-2.13.0.jar:lib/io.fabric8.kubernetes-model-common-4.6.4.jar:lib/com.fasterxml.jackson.core.jackson-databind-2.10.2.jar:lib/io.netty.netty-resolver-dns-4.1.45.Final.jar:lib/org.slf4j.slf4j-api-1.7.25.jar:lib/org.latencyutils.LatencyUtils-2.0.3.jar:lib/org.apache.kafka.kafka-clients-2.4.0.jar:lib/io.micrometer.micrometer-registry-prometheus-1.3.1.jar io.strimzi.operator.topic.Main
[2020-04-15 00:25:05,940] INFO [main ] TopicOperator 0.18.0-SNAPSHOT is starting
[2020-04-15 00:25:05,976] DEBUG [main ] Trying to configure client from Kubernetes config...
[2020-04-15 00:25:05,976] DEBUG [main ] Did not find Kubernetes config at: [/.kube/config]. Ignoring.
[2020-04-15 00:25:05,977] DEBUG [main ] Trying to configure client from service account...
[2020-04-15 00:25:05,977] DEBUG [main ] Found service account host and port: 172.30.0.1:443
[2020-04-15 00:25:05,978] DEBUG [main ] Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
[2020-04-15 00:25:05,980] DEBUG [main ] Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
[2020-04-15 00:25:05,980] DEBUG [main ] Trying to configure client namespace from Kubernetes service account namespace path...
[2020-04-15 00:25:05,981] DEBUG [main ] Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
[2020-04-15 00:25:06,301] DEBUG [main ] Using SLF4J as the default logging framework
[2020-04-15 00:25:06,305] DEBUG [main ] -Dio.netty.leakDetection.level: simple
[2020-04-15 00:25:06,305] DEBUG [main ] -Dio.netty.leakDetection.targetRecords: 4
[2020-04-15 00:25:06,319] DEBUG [main ] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
[2020-04-15 00:25:06,320] DEBUG [main ] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
[2020-04-15 00:25:06,329] DEBUG [main ] -Dio.netty.eventLoopThreads: 2
[2020-04-15 00:25:06,357] DEBUG [main ] -Dio.netty.noKeySetOptimization: false
[2020-04-15 00:25:06,358] DEBUG [main ] -Dio.netty.selectorAutoRebuildThreshold: 512
[2020-04-15 00:25:06,374] DEBUG [main ] -Dio.netty.noUnsafe: false
[2020-04-15 00:25:06,374] DEBUG [main ] Java version: 8
[2020-04-15 00:25:06,375] DEBUG [main ] sun.misc.Unsafe.theUnsafe: available
[2020-04-15 00:25:06,375] DEBUG [main ] sun.misc.Unsafe.copyMemory: available
[2020-04-15 00:25:06,376] DEBUG [main ] java.nio.Buffer.address: available
[2020-04-15 00:25:06,376] DEBUG [main ] direct buffer constructor: available
[2020-04-15 00:25:06,377] DEBUG [main ] java.nio.Bits.unaligned: available, true
[2020-04-15 00:25:06,377] DEBUG [main ] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
[2020-04-15 00:25:06,377] DEBUG [main ] java.nio.DirectByteBuffer.(long, int): available
[2020-04-15 00:25:06,378] DEBUG [main ] sun.misc.Unsafe: available
[2020-04-15 00:25:06,378] DEBUG [main ] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
[2020-04-15 00:25:06,378] DEBUG [main ] -Dio.netty.bitMode: 64 (sun.arch.data.model)
[2020-04-15 00:25:06,379] DEBUG [main ] -Dio.netty.maxDirectMemory: 4026138624 bytes
[2020-04-15 00:25:06,379] DEBUG [main ] -Dio.netty.uninitializedArrayAllocationThreshold: -1
[2020-04-15 00:25:06,381] DEBUG [main ] java.nio.ByteBuffer.cleaner(): available
[2020-04-15 00:25:06,381] DEBUG [main ] -Dio.netty.noPreferDirect: false
[2020-04-15 00:25:06,388] DEBUG [main ] org.jctools-core.MpscChunkedArrayQueue: available
[2020-04-15 00:25:06,489] DEBUG [main ] Default DNS servers: [/172.30.0.2:53] (sun.net.dns.ResolverConfiguration)
[2020-04-15 00:25:06,496] DEBUG [main ] -Djava.net.preferIPv4Stack: false
[2020-04-15 00:25:06,496] DEBUG [main ] -Djava.net.preferIPv6Addresses: false
[2020-04-15 00:25:06,497] DEBUG [main ] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[2020-04-15 00:25:06,498] DEBUG [main ] /proc/sys/net/core/somaxconn: 128
[2020-04-15 00:25:06,520] INFO [main ] Using config:
    STRIMZI_TRUSTSTORE_LOCATION: /tmp/topic-operator/replication.truststore.p12
    STRIMZI_RESOURCE_LABELS: strimzi.io/cluster=my-cluster-source
    STRIMZI_KAFKA_BOOTSTRAP_SERVERS: my-cluster-source-kafka-bootstrap:9091
    STRIMZI_NAMESPACE: mirrormaker2-cluster-test
    STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS: 20000
    STRIMZI_TOPICS_PATH: /strimzi/topics
    STRIMZI_FULL_RECONCILIATION_INTERVAL_MS: 90000
    STRIMZI_ZOOKEEPER_CONNECT: localhost:2181
    STRIMZI_TLS_ENABLED: true
    STRIMZI_KEYSTORE_PASSWORD: Gef7mh2hbfRaIvoBkaGjkFpwR7pHUM8v
    STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS: 6
    STRIMZI_REASSIGN_VERIFY_INTERVAL_MS: 120000
    STRIMZI_KEYSTORE_LOCATION: /tmp/topic-operator/replication.keystore.p12
    TC_ZK_CONNECTION_TIMEOUT_MS: 20000
    STRIMZI_TRUSTSTORE_PASSWORD: Gef7mh2hbfRaIvoBkaGjkFpwR7pHUM8v
    STRIMZI_REASSIGN_THROTTLE: 9223372036854775807
[2020-04-15 00:25:06,540] DEBUG [main ] Using SLF4J as the default logging framework
[2020-04-15 00:25:06,583] INFO [oop-thread-0] Starting
[2020-04-15 00:25:06,601] INFO [oop-thread-0] AdminClientConfig values:
    bootstrap.servers = [my-cluster-source-kafka-bootstrap:9091]
    client.dns.lookup = default
    client.id =
    connections.max.idle.ms = 300000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 120000
    retries = 5
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = SSL
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = HTTPS
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = /tmp/topic-operator/replication.keystore.p12
    ssl.keystore.password = [hidden]
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = /tmp/topic-operator/replication.truststore.p12
    ssl.truststore.password = [hidden]
    ssl.truststore.type = JKS
[2020-04-15 00:25:06,630] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [my-cluster-source-kafka-bootstrap:9091 (id: -1 rack: null)], partitions = [], controller = null).
[2020-04-15 00:25:06,777] DEBUG [oop-thread-0] Created SSL context with keystore SecurityStore(path=/tmp/topic-operator/replication.keystore.p12, modificationTime=Wed Apr 15 00:25:05 UTC 2020), truststore SecurityStore(path=/tmp/topic-operator/replication.truststore.p12, modificationTime=Wed Apr 15 00:25:05 UTC 2020), provider SunJSSE.
[2020-04-15 00:25:06,783] DEBUG [oop-thread-0] Added sensor with name connections-closed:
[2020-04-15 00:25:06,785] DEBUG [oop-thread-0] Added sensor with name connections-created:
[2020-04-15 00:25:06,786] DEBUG [oop-thread-0] Added sensor with name successful-authentication:
[2020-04-15 00:25:06,786] DEBUG [oop-thread-0] Added sensor with name successful-reauthentication:
[2020-04-15 00:25:06,792] DEBUG [oop-thread-0] Added sensor with name successful-authentication-no-reauth:
[2020-04-15 00:25:06,793] DEBUG [oop-thread-0] Added sensor with name failed-authentication:
[2020-04-15 00:25:06,793] DEBUG [oop-thread-0] Added sensor with name failed-reauthentication:
[2020-04-15 00:25:06,793] DEBUG [oop-thread-0] Added sensor with name reauthentication-latency:
[2020-04-15 00:25:06,795] DEBUG [oop-thread-0] Added sensor with name bytes-sent-received:
[2020-04-15 00:25:06,795] DEBUG [oop-thread-0] Added sensor with name bytes-sent:
[2020-04-15 00:25:06,796] DEBUG [oop-thread-0] Added sensor with name bytes-received:
[2020-04-15 00:25:06,797] DEBUG [oop-thread-0] Added sensor with name select-time:
[2020-04-15 00:25:06,797] DEBUG [oop-thread-0] Added sensor with name io-time:
[2020-04-15 00:25:06,810] INFO [oop-thread-0] Kafka version: 2.4.0
[2020-04-15 00:25:06,811] INFO [oop-thread-0] Kafka commitId: 77a89fcf8d7fa018
[2020-04-15 00:25:06,811] INFO [oop-thread-0] Kafka startTimeMs: 1586910306809
[2020-04-15 00:25:06,813] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Kafka admin client initialized
[2020-04-15 00:25:06,813] DEBUG [oop-thread-0] Using AdminClient org.apache.kafka.clients.admin.KafkaAdminClient@5374812e
[2020-04-15 00:25:06,815] DEBUG [oop-thread-0] Using Kafka io.strimzi.operator.topic.KafkaImpl@314c117f
[2020-04-15 00:25:06,815] DEBUG [oop-thread-0] Using namespace mirrormaker2-cluster-test
[2020-04-15 00:25:06,829] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Initiating connection to node my-cluster-source-kafka-bootstrap:9091 (id: -1 rack: null) using address my-cluster-source-kafka-bootstrap/172.30.71.138
[2020-04-15 00:25:06,899] DEBUG [oop-thread-0] Using k8s io.strimzi.operator.topic.K8sImpl@50e2cbb7
[2020-04-15 00:25:06,912] DEBUG [dminclient-1] Added sensor with name node--1.bytes-sent
[2020-04-15 00:25:06,913] DEBUG [dminclient-1] Added sensor with name node--1.bytes-received
[2020-04-15 00:25:06,913] DEBUG [dminclient-1] Added sensor with name node--1.latency
[2020-04-15 00:25:06,914] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
[2020-04-15 00:25:06,943] DEBUG [ker-thread-0] Creating new ZookKeeper instance to connect to localhost:2181.
[2020-04-15 00:25:06,943] INFO [calhost:2181] Starting ZkClient event thread.
[2020-04-15 00:25:06,950] INFO [ker-thread-0] Client environment:zookeeper.version=3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
[2020-04-15 00:25:06,950] INFO [ker-thread-0] Client environment:host.name=my-cluster-source-entity-operator-f8bd9586-nvshn
[2020-04-15 00:25:06,950] INFO [ker-thread-0] Client environment:java.version=1.8.0_242
[2020-04-15 00:25:06,950] INFO [ker-thread-0] Client environment:java.vendor=Oracle Corporation
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:java.class.path=lib/io.strimzi.topic-operator-0.18.0-SNAPSHOT.jar:lib/io.prometheus.simpleclient_common-0.7.0.jar:lib/com.github.luben.zstd-jni-1.4.3-1.jar:lib/io.netty.netty-handler-4.1.45.Final.jar:lib/com.101tec.zkclient-0.11.jar:lib/io.netty.netty-codec-http-4.1.45.Final.jar:lib/io.strimzi.operator-common-0.18.0-SNAPSHOT.jar:lib/com.squareup.okio.okio-1.15.0.jar:lib/io.netty.netty-buffer-4.1.45.Final.jar:lib/org.yaml.snakeyaml-1.24.jar:lib/io.fabric8.openshift-client-4.6.4.jar:lib/io.strimzi.api-0.18.0-SNAPSHOT.jar:lib/io.netty.netty-common-4.1.45.Final.jar:lib/org.apache.logging.log4j.log4j-api-2.13.0.jar:lib/org.xerial.snappy.snappy-java-1.1.7.3.jar:lib/org.hdrhistogram.HdrHistogram-2.1.11.jar:lib/io.prometheus.simpleclient-0.7.0.jar:lib/org.apache.yetus.audience-annotations-0.5.0.jar:lib/com.fasterxml.jackson.dataformat.jackson-dataformat-yaml-2.10.2.jar:lib/io.strimzi.certificate-manager-0.18.0-SNAPSHOT.jar:lib/io.netty.netty-codec-4.1.45.Final.jar:lib/io.micrometer.micrometer-core-1.3.1.jar:lib/jakarta.activation.jakarta.activation-api-1.2.1.jar:lib/io.vertx.vertx-core-3.8.5.jar:lib/io.netty.netty-codec-dns-4.1.45.Final.jar:lib/io.fabric8.kubernetes-model-4.6.4.jar:lib/io.netty.netty-codec-socks-4.1.45.Final.jar:lib/com.github.mifmif.generex-1.0.2.jar:lib/org.apache.zookeeper.zookeeper-jute-3.5.6.jar:lib/io.netty.netty-resolver-4.1.45.Final.jar:lib/io.netty.netty-handler-proxy-4.1.45.Final.jar:lib/com.squareup.okhttp3.logging-interceptor-3.12.6.jar:lib/io.netty.netty-transport-native-unix-common-4.1.45.Final.jar:lib/org.apache.zookeeper.zookeeper-3.5.6.jar:lib/dk.brics.automaton.automaton-1.11-8.jar:lib/io.vertx.vertx-micrometer-metrics-3.8.5.jar:lib/com.fasterxml.jackson.core.jackson-core-2.10.2.jar:lib/io.netty.netty-transport-4.1.45.Final.jar:lib/io.netty.netty-transport-native-epoll-4.1.45.Final.jar:lib/jakarta.xml.bind.jakarta.xml.bind-api-2.3.2.jar:lib/org.apache.logging.log4j.log4j-slf4j-impl-2.13.0.jar:lib/com.fasterxml.jackson.core.jackson-annotations-2.10.2.jar:lib/io.fabric8.zjsonpatch-0.3.0.jar:lib/org.lz4.lz4-java-1.6.0.jar:lib/io.fabric8.kubernetes-client-4.6.4.jar:lib/com.fasterxml.jackson.module.jackson-module-jaxb-annotations-2.10.2.jar:lib/io.strimzi.crd-annotations-0.18.0-SNAPSHOT.jar:lib/com.squareup.okhttp3.okhttp-3.12.6.jar:lib/io.netty.netty-codec-http2-4.1.45.Final.jar:lib/org.apache.logging.log4j.log4j-core-2.13.0.jar:lib/io.fabric8.kubernetes-model-common-4.6.4.jar:lib/com.fasterxml.jackson.core.jackson-databind-2.10.2.jar:lib/io.netty.netty-resolver-dns-4.1.45.Final.jar:lib/org.slf4j.slf4j-api-1.7.25.jar:lib/org.latencyutils.LatencyUtils-2.0.3.jar:lib/org.apache.kafka.kafka-clients-2.4.0.jar:lib/io.micrometer.micrometer-registry-prometheus-1.3.1.jar
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:java.io.tmpdir=/tmp
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:java.compiler=
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:os.name=Linux
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:os.arch=amd64
[2020-04-15 00:25:06,951] INFO [ker-thread-0] Client environment:os.version=3.10.0-944.el7.x86_64
[2020-04-15 00:25:06,952] INFO [ker-thread-0] Client environment:user.name=?
[2020-04-15 00:25:06,952] INFO [ker-thread-0] Client environment:user.home=?
[2020-04-15 00:25:06,952] INFO [ker-thread-0] Client environment:user.dir=/opt/strimzi
[2020-04-15 00:25:06,952] INFO [ker-thread-0] Client environment:os.memory.free=222MB
[2020-04-15 00:25:06,952] INFO [ker-thread-0] Client environment:os.memory.max=3839MB
[2020-04-15 00:25:06,952] INFO [ker-thread-0] Client environment:os.memory.total=241MB
[2020-04-15 00:25:06,954] INFO [ker-thread-0] Initiating client connection, connectString=localhost:2181 sessionTimeout=20000 watcher=org.I0Itec.zkclient.ZkClient@53cfc720
[2020-04-15 00:25:06,959] INFO [ker-thread-0] Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
[2020-04-15 00:25:06,963] INFO [ker-thread-0] jute.maxbuffer value is 4194304 Bytes
[2020-04-15 00:25:06,976] INFO [ker-thread-0] zookeeper.request.timeout value is 0. feature enabled=
[2020-04-15 00:25:06,977] DEBUG [ker-thread-0] Awaiting connection to Zookeeper server
[2020-04-15 00:25:06,977] INFO [ker-thread-0] Waiting for keeper state SyncConnected
[2020-04-15 00:25:06,981] DEBUG [alhost:2181)] Canonicalized address to localhost
[2020-04-15 00:25:06,983] INFO [alhost:2181)] Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
[2020-04-15 00:25:06,984] INFO [alhost:2181)] Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused
[2020-04-15 00:25:06,984] DEBUG [alhost:2181)] Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:777) ~[?:1.8.0_242]
    at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:412) ~[?:1.8.0_242]
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:198) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
[2020-04-15 00:25:06,989] DEBUG [alhost:2181)] Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:794) ~[?:1.8.0_242]
    at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:420) ~[?:1.8.0_242]
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:205) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
[2020-04-15 00:25:07,081] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
[2020-04-15 00:25:07,231] DEBUG [dminclient-1] [SslTransportLayer channelId=-1 key=sun.nio.ch.SelectionKeyImpl@6b8b596] SSL handshake completed successfully with peerHost 'my-cluster-source-kafka-bootstrap' peerPort 9091 peerPrincipal 'CN=my-cluster-source-kafka, O=io.strimzi' cipherSuite 'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384'
[2020-04-15 00:25:07,231] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Successfully authenticated with my-cluster-source-kafka-bootstrap/172.30.71.138
[2020-04-15 00:25:07,232] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
[2020-04-15 00:25:07,370] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Recorded API versions for node -1: (Produce(0): 0 to 8 [usable: 8], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 9], LeaderAndIsr(4): 0 to 4 [usable: 4], StopReplica(5): 0 to 2 [usable: 2], UpdateMetadata(6): 0 to 6 [usable: 6], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 6 [usable: 6], FindCoordinator(10): 0 to 3 [usable: 3], JoinGroup(11): 0 to 6 [usable: 6], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 4 [usable: 4], SyncGroup(14): 0 to 4 [usable: 4], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 3 [usable: 3], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 5 [usable: 5], DeleteTopics(20): 0 to 4 [usable: 4], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 2], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 2], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0])
[2020-04-15 00:25:07,391] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = TABdUGrXQIWBip5IJnTBKA, nodes = [my-cluster-source-kafka-0.my-cluster-source-kafka-brokers.mirrormaker2-cluster-test.svc:9091 (id: 0 rack: null)], partitions = [], controller = my-cluster-source-kafka-0.my-cluster-source-kafka-brokers.mirrormaker2-cluster-test.svc:9091 (id: 0 rack: null))
[2020-04-15 00:25:08,092] DEBUG [alhost:2181)] Canonicalized address to localhost
[2020-04-15 00:25:08,092] INFO [alhost:2181)] Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
[2020-04-15 00:25:08,093] INFO [alhost:2181)] Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused
[2020-04-15 00:25:08,093] DEBUG [alhost:2181)] Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:777) ~[?:1.8.0_242]
    at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:412) ~[?:1.8.0_242]
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:198) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
[2020-04-15 00:25:08,093] DEBUG [alhost:2181)] Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:794) ~[?:1.8.0_242]
    at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:420) ~[?:1.8.0_242]
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:205) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
[2020-04-15 00:25:09,194] DEBUG [alhost:2181)] Canonicalized address to localhost
[2020-04-15 00:25:09,194] INFO [alhost:2181)] Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
[2020-04-15 00:25:09,195] INFO [alhost:2181)] Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused
[2020-04-15 00:25:09,195] DEBUG [alhost:2181)] Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:777) ~[?:1.8.0_242]
    at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:412) ~[?:1.8.0_242]
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:198) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
[2020-04-15 00:25:09,195] DEBUG [alhost:2181)] Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:794) ~[?:1.8.0_242]
    at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:420) ~[?:1.8.0_242]
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:205) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) [org.apache.zookeeper.zookeeper-3.5.6.jar:3.5.6]
[2020-04-15 00:25:10,296] DEBUG [alhost:2181)] Canonicalized address to localhost
[2020-04-15 00:25:10,296] INFO [alhost:2181)] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
[2020-04-15 00:25:10,297] INFO [alhost:2181)] Socket connection established, initiating session, client: /127.0.0.1:35884, server: localhost/127.0.0.1:2181
[2020-04-15 00:25:10,299] DEBUG [alhost:2181)] Session establishment request sent on localhost/127.0.0.1:2181
[2020-04-15 00:25:10,307] INFO [alhost:2181)] Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000140c8d70003, negotiated timeout = 20000
[2020-04-15 00:25:10,309] DEBUG [-EventThread] Received event: WatchedEvent state:SyncConnected type:None path:null
[2020-04-15 00:25:10,309] INFO [-EventThread] zookeeper state changed (SyncConnected)
[2020-04-15 00:25:10,309] DEBUG [-EventThread] Leaving process event
[2020-04-15 00:25:10,309] DEBUG [ker-thread-0] State is SyncConnected
[2020-04-15 00:25:10,313] DEBUG [oop-thread-0] Using ZooKeeper io.strimzi.operator.topic.zk.ZkImpl@1cc90419
[2020-04-15 00:25:10,323] DEBUG [oop-thread-0] Using TopicStore io.strimzi.operator.topic.ZkTopicStore@29d9b787
[2020-04-15 00:25:10,328] DEBUG [oop-thread-0] Using Operator io.strimzi.operator.topic.TopicOperator@7ac81f4a
[2020-04-15 00:25:10,329] DEBUG [oop-thread-0] Using TopicConfigsWatcher io.strimzi.operator.topic.TopicConfigsWatcher@778c0a78
[2020-04-15 00:25:10,330] DEBUG [oop-thread-0] Using TopicWatcher io.strimzi.operator.topic.ZkTopicWatcher@208e8d4a
[2020-04-15 00:25:10,330] DEBUG [oop-thread-0] Using TopicsWatcher io.strimzi.operator.topic.ZkTopicsWatcher@64cec84a
[2020-04-15 00:25:10,335] DEBUG [oop-thread-0] Starting Thread[resource-watcher,5,main]
[2020-04-15 00:25:10,338] INFO [oop-thread-0] Starting initial reconciliation
[2020-04-15 00:25:10,339] DEBUG [oop-thread-0] Listing topics
[2020-04-15 00:25:10,340] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=listTopics, deadlineMs=1586910430340) with a timeout 120000 ms from now.
[2020-04-15 00:25:10,342] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Initiating connection to node my-cluster-source-kafka-0.my-cluster-source-kafka-brokers.mirrormaker2-cluster-test.svc:9091 (id: 0 rack: null) using address my-cluster-source-kafka-0.my-cluster-source-kafka-brokers.mirrormaker2-cluster-test.svc/172.17.0.13
[2020-04-15 00:25:10,343] DEBUG [dminclient-1] Added sensor with name node-0.bytes-sent
[2020-04-15 00:25:10,344] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 1,1 replyHeader:: 1,4294967345,0 request:: '/strimzi,,v{s{15,s{'world,'anyone}}},0 response:: '/strimzi
[2020-04-15 00:25:10,344] DEBUG [dminclient-1] Added sensor with name node-0.bytes-received
[2020-04-15 00:25:10,345] DEBUG [dminclient-1] Added sensor with name node-0.latency
[2020-04-15 00:25:10,345] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 0
[2020-04-15 00:25:10,346] DEBUG [urce-watcher] Watching KafkaTopics matching {strimzi.io/cluster=my-cluster-source}
[2020-04-15 00:25:10,347] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Completed connection to node 0. Fetching API versions.
[2020-04-15 00:25:10,350] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,4294967345,0 request:: '/brokers/topics,T response:: s{4294967304,4294967304,1586910284232,1586910284232,0,0,0,0,0,0,4294967304}
[2020-04-15 00:25:10,360] INFO [oop-thread-0] Started
[2020-04-15 00:25:10,364] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 3,8 replyHeader:: 3,4294967345,0 request:: '/brokers/topics,T response:: v{}
[2020-04-15 00:25:10,394] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 4,1 replyHeader:: 4,4294967346,0 request:: '/strimzi/topics,,v{s{15,s{'world,'anyone}}},0 response:: '/strimzi/topics
[2020-04-15 00:25:10,395] DEBUG [dminclient-1] [SslTransportLayer channelId=0 key=sun.nio.ch.SelectionKeyImpl@6278c21c] SSL handshake completed successfully with peerHost 'my-cluster-source-kafka-0.my-cluster-source-kafka-brokers.mirrormaker2-cluster-test.svc' peerPort 9091 peerPrincipal 'CN=my-cluster-source-kafka, O=io.strimzi' cipherSuite 'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384'
[2020-04-15 00:25:10,395] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Successfully authenticated with my-cluster-source-kafka-0.my-cluster-source-kafka-brokers.mirrormaker2-cluster-test.svc/172.17.0.13
[2020-04-15 00:25:10,396] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 0.
[2020-04-15 00:25:10,406] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 5,8 replyHeader:: 5,4294967346,0 request:: '/brokers/topics,T response:: v{}
[2020-04-15 00:25:10,406] DEBUG [dminclient-1] [AdminClient clientId=adminclient-1] Recorded API versions for node 0: (Produce(0): 0 to 8 [usable: 8], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 9], LeaderAndIsr(4): 0 to 4 [usable: 4], StopReplica(5): 0 to 2 [usable: 2], UpdateMetadata(6): 0 to 6 [usable: 6], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 6 [usable: 6], FindCoordinator(10): 0 to 3 [usable: 3], JoinGroup(11): 0 to 6 [usable: 6], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 4 [usable: 4], SyncGroup(14): 0 to 4 [usable: 4], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 3 [usable: 3], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 5 [usable: 5], DeleteTopics(20): 0 to 4 [usable: 4], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 2], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 2], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0])
[2020-04-15 00:25:10,415] DEBUG [oop-thread-0] Setting initial children []
[2020-04-15 00:25:10,439] DEBUG [oop-thread-0] Future KafkaFuture{value=[],exception=null,done=true} has result []
[2020-04-15 00:25:10,441] DEBUG [oop-thread-0] Reconciling kafka topics []
[2020-04-15 00:25:10,444] DEBUG [oop-thread-0] Handler for work listTopics1141633917 executed ok
[2020-04-15 00:25:10,579] DEBUG [urce-watcher] Connecting websocket ... io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@3475d5f0
[2020-04-15 00:25:10,581] WARN [s-ops-tool-0] The client is using resource type 'kafkatopics' with unstable version 'v1beta1'
[2020-04-15 00:25:10,705] DEBUG [2.30.0.1/...] WebSocket successfully opened
[2020-04-15 00:25:10,705] DEBUG [urce-watcher] Watching setup
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.numHeapArenas: 2
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.numDirectArenas: 2
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.pageSize: 8192
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.maxOrder: 11
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.chunkSize: 16777216
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.tinyCacheSize: 512
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.smallCacheSize: 256
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.normalCacheSize: 64
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.cacheTrimInterval: 8192
[2020-04-15 00:25:10,767] DEBUG [urce-watcher] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[2020-04-15 00:25:10,768] DEBUG [urce-watcher] -Dio.netty.allocator.useCacheForAllThreads: true
[2020-04-15 00:25:10,768] DEBUG [urce-watcher] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[2020-04-15 00:25:10,912] DEBUG [urce-watcher] -Dio.netty.processId: 26 (auto-detected)
[2020-04-15 00:25:10,915] DEBUG [urce-watcher] -Dio.netty.machineId: 02:42:ac:ff:fe:11:00:0e (auto-detected)
[2020-04-15 00:25:10,936] DEBUG [2.30.0.1/...] Ignoring initial event for KafkaTopic mm2-offset-syncs.my-cluster-target.internal during initial reconcile
[2020-04-15 00:25:10,937] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826: Concurrent modification in kube: new version 124031
[2020-04-15 00:25:10,938] DEBUG [2.30.0.1/...] Ignoring initial event for KafkaTopic my-topic-test-1 during initial reconcile
[2020-04-15 00:25:10,939] DEBUG [2.30.0.1/...] Ignoring initial event for KafkaTopic availability-topic-source-436735826 during initial reconcile
[2020-04-15 00:25:10,940] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: Topic availability-topic-source-436735826 exists in Kafka, but not Kubernetes
[2020-04-15 00:25:11,006] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: Queuing action reconcile-with-kube on topic availability-topic-source-436735826
[2020-04-15 00:25:11,007] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: Adding first waiter reconcile-with-kube
[2020-04-15 00:25:11,016] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal: Concurrent modification in kube: new version 124565
[2020-04-15 00:25:11,017] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: Topic mm2-offset-syncs.my-cluster-target.internal exists in Kafka, but not Kubernetes
[2020-04-15 00:25:11,017] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: Queuing action reconcile-with-kube on topic mm2-offset-syncs.my-cluster-target.internal
[2020-04-15 00:25:11,017] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: Adding first waiter reconcile-with-kube
[2020-04-15 00:25:11,017] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1: Concurrent modification in kube: new version 130083
[2020-04-15 00:25:11,017] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1|130083: Topic my-topic-test-1 exists in Kafka, but not Kubernetes
[2020-04-15 00:25:11,017] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1|130083: Queuing action reconcile-with-kube on topic my-topic-test-1
[2020-04-15 00:25:11,018] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1|130083: Adding first waiter reconcile-with-kube
[2020-04-15 00:25:11,032] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: Lock acquired
[2020-04-15 00:25:11,032] DEBUG [urce-watcher] -Dio.netty.allocator.type: pooled
[2020-04-15 00:25:11,032] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: Executing action reconcile-with-kube on topic availability-topic-source-436735826
[2020-04-15 00:25:11,032] DEBUG [urce-watcher] -Dio.netty.threadLocalDirectBufferSize: 0
[2020-04-15 00:25:11,033] DEBUG [urce-watcher] -Dio.netty.maxThreadLocalCharBufferSize: 16384
[2020-04-15 00:25:11,035] DEBUG [oop-thread-0] Getting metadata for topic availability-topic-source-436735826
[2020-04-15 00:25:11,037] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910431037) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,038] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910431038) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,042] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: Lock acquired
[2020-04-15 00:25:11,042] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: Executing action reconcile-with-kube on topic mm2-offset-syncs.my-cluster-target.internal
[2020-04-15 00:25:11,043] DEBUG [oop-thread-0] Getting metadata for topic mm2-offset-syncs.my-cluster-target.internal
[2020-04-15 00:25:11,043] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910431043) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,043] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910431043) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,044] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1|130083: Lock acquired
[2020-04-15 00:25:11,044] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1|130083: Executing action reconcile-with-kube on topic my-topic-test-1
[2020-04-15 00:25:11,044] DEBUG [oop-thread-0] Getting metadata for topic my-topic-test-1
[2020-04-15 00:25:11,044] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910431044) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,044] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910431044) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,047] INFO [oop-thread-1] Session deployed
[2020-04-15 00:25:11,069] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 6,4 replyHeader:: 6,4294967346,-101 request:: '/strimzi/topics/availability-topic-source-436735826,F response::
[2020-04-15 00:25:11,084] INFO [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: Reconciling topic availability-topic-source-436735826, k8sTopic:nonnull, kafkaTopic:null, privateTopic:null
[2020-04-15 00:25:11,084] DEBUG [oop-thread-0] 1|initial kube availability-topic-source-436735826|124031: KafkaTopic created in k8s, will create topic in kafka and topicStore
[2020-04-15 00:25:11,086] DEBUG [oop-thread-0] Enqueuing event CreateKafkaTopic(topicName=availability-topic-source-436735826,ctx=1|initial kube availability-topic-source-436735826|124031)
[2020-04-15 00:25:11,092] DEBUG [oop-thread-0] Creating topic (name=availability-topic-source-436735826, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={message.format.version=2.4-IV1})
[2020-04-15 00:25:11,095] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=createTopics, deadlineMs=1586910431094) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,106] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 7,4 replyHeader:: 7,4294967346,-101 request:: '/strimzi/topics/mm2-offset-syncs.my-cluster-target.internal,F response::
[2020-04-15 00:25:11,109] INFO [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: Reconciling topic mm2-offset-syncs.my-cluster-target.internal, k8sTopic:nonnull, kafkaTopic:null, privateTopic:null
[2020-04-15 00:25:11,110] DEBUG [oop-thread-0] 4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565: KafkaTopic created in k8s, will create topic in kafka and topicStore
[2020-04-15 00:25:11,110] DEBUG [oop-thread-0] Enqueuing event CreateKafkaTopic(topicName=mm2-offset-syncs.my-cluster-target.internal,ctx=4|initial kube mm2-offset-syncs.my-cluster-target.internal|124565)
[2020-04-15 00:25:11,110] DEBUG [oop-thread-0] Creating topic (name=mm2-offset-syncs.my-cluster-target.internal, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact, message.format.version=2.4-IV1})
[2020-04-15 00:25:11,110] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=createTopics, deadlineMs=1586910431110) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,115] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 8,4 replyHeader:: 8,4294967346,-101 request:: '/strimzi/topics/my-topic-test-1,F response::
[2020-04-15 00:25:11,116] INFO [oop-thread-0] 5|initial kube my-topic-test-1|130083: Reconciling topic my-topic-test-1, k8sTopic:nonnull, kafkaTopic:null, privateTopic:null
[2020-04-15 00:25:11,116] DEBUG [oop-thread-0] 5|initial kube my-topic-test-1|130083: KafkaTopic created in k8s, will create topic in kafka and topicStore
[2020-04-15 00:25:11,116] DEBUG [oop-thread-0] Enqueuing event CreateKafkaTopic(topicName=my-topic-test-1,ctx=5|initial kube my-topic-test-1|130083)
[2020-04-15 00:25:11,117] DEBUG [oop-thread-0] Creating topic (name=my-topic-test-1, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={message.format.version=2.4-IV1})
[2020-04-15 00:25:11,117] DEBUG [oop-thread-0] [AdminClient clientId=adminclient-1] Queueing Call(callName=createTopics, deadlineMs=1586910431117) with a timeout 120000 ms from now.
[2020-04-15 00:25:11,161] DEBUG [alhost:2181)] Got notification sessionid:0x1000140c8d70003
[2020-04-15 00:25:11,162] DEBUG [alhost:2181)] Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for sessionid 0x1000140c8d70003
[2020-04-15 00:25:11,163] DEBUG [-EventThread] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics
[2020-04-15 00:25:11,163] DEBUG [-EventThread] New event: ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:25:11,163] DEBUG [-EventThread] Leaving process event
[2020-04-15 00:25:11,163] DEBUG [calhost:2181] Delivering event #1 ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:25:11,175] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 9,3 replyHeader:: 9,4294967349,0 request:: '/brokers/topics,T response:: s{4294967304,4294967304,1586910284232,1586910284232,0,1,0,0,0,1,4294967349}
[2020-04-15 00:25:25,142] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 8ms
[2020-04-15 00:25:31,803] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:25:38,475] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:25:58,487] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:26:25,191] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 2ms
[2020-04-15 00:26:31,865] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 2ms
[2020-04-15 00:26:38,524] INFO [2.30.0.1/...] 9|kube +mirrormaker2-topic-example-614585750|132533: event ADDED on resource mirrormaker2-topic-example-614585750 generation=1, labels={strimzi.io/cluster=my-cluster-source}
[2020-04-15 00:26:38,527] DEBUG [2.30.0.1/...] 9|kube +mirrormaker2-topic-example-614585750|132533: Queuing action onResourceEvent on topic mirrormaker2-topic-example-614585750
[2020-04-15 00:26:38,527] DEBUG [2.30.0.1/...] 9|kube +mirrormaker2-topic-example-614585750|132533: Adding first waiter onResourceEvent
[2020-04-15 00:26:38,529] DEBUG [oop-thread-1] 9|kube +mirrormaker2-topic-example-614585750|132533: Lock acquired
[2020-04-15 00:26:38,529] DEBUG [oop-thread-1] 9|kube +mirrormaker2-topic-example-614585750|132533: Executing action onResourceEvent on topic mirrormaker2-topic-example-614585750
[2020-04-15 00:26:38,537] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:26:38,546] DEBUG [oop-thread-1] 9|kube +mirrormaker2-topic-example-614585750|132533: last updated generation=null
[2020-04-15 00:26:38,546] DEBUG [oop-thread-1] 9|kube +mirrormaker2-topic-example-614585750|132533: modifiedTopic.getMetadata().getGeneration()=1
[2020-04-15 00:26:38,546] DEBUG [oop-thread-1] Getting metadata for topic mirrormaker2-topic-example-614585750
[2020-04-15 00:26:38,547] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910518547) with a timeout 120000 ms from now.
[2020-04-15 00:26:38,547] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910518547) with a timeout 120000 ms from now.
[2020-04-15 00:26:38,556] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 33,4 replyHeader:: 33,4294967377,-101 request:: '/strimzi/topics/mirrormaker2-topic-example-614585750,F response::
[2020-04-15 00:26:38,572] INFO [oop-thread-1] 9|kube +mirrormaker2-topic-example-614585750|132533: Reconciling topic mirrormaker2-topic-example-614585750, k8sTopic:nonnull, kafkaTopic:null, privateTopic:null
[2020-04-15 00:26:38,572] DEBUG [oop-thread-1] 9|kube +mirrormaker2-topic-example-614585750|132533: KafkaTopic created in k8s, will create topic in kafka and topicStore
[2020-04-15 00:26:38,572] DEBUG [oop-thread-1] Enqueuing event CreateKafkaTopic(topicName=mirrormaker2-topic-example-614585750,ctx=9|kube +mirrormaker2-topic-example-614585750|132533)
[2020-04-15 00:26:38,572] DEBUG [oop-thread-1] Creating topic (name=mirrormaker2-topic-example-614585750, numPartitions=3, replicationFactor=1, replicasAssignments=null, configs={min.insync.replicas=1, retention.ms=7200000, segment.bytes=1073741824})
[2020-04-15 00:26:38,573] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=createTopics, deadlineMs=1586910518572) with a timeout 120000 ms from now.
[2020-04-15 00:26:38,669] DEBUG [alhost:2181)] Got notification sessionid:0x1000140c8d70003
[2020-04-15 00:26:38,669] DEBUG [alhost:2181)] Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for sessionid 0x1000140c8d70003
[2020-04-15 00:26:38,669] DEBUG [-EventThread] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics
[2020-04-15 00:26:38,669] DEBUG [-EventThread] New event: ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:26:38,669] DEBUG [-EventThread] Leaving process event
[2020-04-15 00:26:38,670] DEBUG [calhost:2181] Delivering event #4 ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:26:38,678] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 34,3 replyHeader:: 34,4294967380,0 request:: '/brokers/topics,T response:: s{4294967304,4294967304,1586910284232,1586910284232,0,4,0,0,0,4,4294967380}
[2020-04-15 00:26:38,682] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 35,8 replyHeader:: 35,4294967380,0 request:: '/brokers/topics,T response:: v{'availability-topic-source-436735826,'mm2-offset-syncs.my-cluster-target.internal,'mirrormaker2-topic-example-614585750,'my-topic-test-1}
[2020-04-15 00:26:38,682] DEBUG [calhost:2181] znode /brokers/topics now has children [availability-topic-source-436735826, mm2-offset-syncs.my-cluster-target.internal, mirrormaker2-topic-example-614585750, my-topic-test-1], previous children [availability-topic-source-436735826, mm2-offset-syncs.my-cluster-target.internal, my-topic-test-1]
[2020-04-15 00:26:38,682] INFO [calhost:2181] Created topics: [mirrormaker2-topic-example-614585750]
[2020-04-15 00:26:38,682] DEBUG [calhost:2181] Watching znode /config/topics/mirrormaker2-topic-example-614585750 for changes
[2020-04-15 00:26:38,682] DEBUG [calhost:2181] Watching znode /brokers/topics/mirrormaker2-topic-example-614585750 for changes
[2020-04-15 00:26:38,683] DEBUG [calhost:2181] 10|/brokers/topics +mirrormaker2-topic-example-614585750: Queuing action onTopicCreated on topic mirrormaker2-topic-example-614585750
[2020-04-15 00:26:38,683] DEBUG [calhost:2181] 10|/brokers/topics +mirrormaker2-topic-example-614585750: Adding waiter onTopicCreated: 2
[2020-04-15 00:26:38,683] DEBUG [calhost:2181] Delivering event #4 done
[2020-04-15 00:26:38,691] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 36,3 replyHeader:: 36,4294967381,0 request:: '/config/topics/mirrormaker2-topic-example-614585750,T response:: s{4294967379,4294967379,1586910398649,1586910398649,0,0,0,0,104,0,4294967379}
[2020-04-15 00:26:38,692] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 37,3 replyHeader:: 37,4294967381,0 request:: '/brokers/topics/mirrormaker2-topic-example-614585750,T response:: s{4294967380,4294967380,1586910398659,1586910398659,0,1,0,0,96,1,4294967381}
[2020-04-15 00:26:38,692] DEBUG [.zk.ZkImpl-2] Subscribed data changes for /config/topics/mirrormaker2-topic-example-614585750
[2020-04-15 00:26:38,692] DEBUG [.zk.ZkImpl-1] Subscribed data changes for /brokers/topics/mirrormaker2-topic-example-614585750
[2020-04-15 00:26:55,111] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:26:57,894] DEBUG [alhost:2181)] Got notification sessionid:0x1000140c8d70003
[2020-04-15 00:26:57,894] DEBUG [alhost:2181)] Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for sessionid 0x1000140c8d70003
[2020-04-15 00:26:57,895] DEBUG [-EventThread] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics
[2020-04-15 00:26:57,895] DEBUG [-EventThread] New event: ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:26:57,895] DEBUG [-EventThread] Leaving process event
[2020-04-15 00:26:57,895] DEBUG [calhost:2181] Delivering event #5 ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:26:57,898] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 47,3 replyHeader:: 47,4294967402,0 request:: '/brokers/topics,T response:: s{4294967304,4294967304,1586910284232,1586910284232,0,5,0,0,0,5,4294967402}
[2020-04-15 00:26:57,903] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 48,8 replyHeader:: 48,4294967402,0 request:: '/brokers/topics,T response:: v{'availability-topic-source-436735826,'availability-topic-source-923068938,'mm2-offset-syncs.my-cluster-target.internal,'mirrormaker2-topic-example-614585750,'my-topic-test-1}
[2020-04-15 00:26:57,904] DEBUG [calhost:2181] znode /brokers/topics now has children [availability-topic-source-436735826, availability-topic-source-923068938, mm2-offset-syncs.my-cluster-target.internal, mirrormaker2-topic-example-614585750, my-topic-test-1], previous children [availability-topic-source-436735826, mm2-offset-syncs.my-cluster-target.internal, mirrormaker2-topic-example-614585750, my-topic-test-1]
[2020-04-15 00:26:57,904] INFO [calhost:2181] Created topics: [availability-topic-source-923068938]
[2020-04-15 00:26:57,904] DEBUG [calhost:2181] Watching znode /config/topics/availability-topic-source-923068938 for changes
[2020-04-15 00:26:57,904] DEBUG [calhost:2181] Watching znode /brokers/topics/availability-topic-source-923068938 for changes
[2020-04-15 00:26:57,904] DEBUG [calhost:2181] 21|/brokers/topics +availability-topic-source-923068938: Queuing action onTopicCreated on topic availability-topic-source-923068938
[2020-04-15 00:26:57,904] DEBUG [calhost:2181] 21|/brokers/topics +availability-topic-source-923068938: Adding first waiter onTopicCreated
[2020-04-15 00:26:57,904] DEBUG [calhost:2181] Delivering event #5 done
[2020-04-15 00:26:57,907] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Lock acquired
[2020-04-15 00:26:57,907] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Executing action onTopicCreated on topic availability-topic-source-923068938
[2020-04-15 00:26:57,907] DEBUG [oop-thread-1] Getting metadata for topic availability-topic-source-923068938
[2020-04-15 00:26:57,907] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910537907) with a timeout 120000 ms from now.
[2020-04-15 00:26:57,907] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910537907) with a timeout 120000 ms from now.
[2020-04-15 00:26:57,908] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 49,3 replyHeader:: 49,4294967402,0 request:: '/config/topics/availability-topic-source-923068938,T response:: s{4294967401,4294967401,1586910417887,1586910417887,0,0,0,0,25,0,4294967401}
[2020-04-15 00:26:57,908] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 50,3 replyHeader:: 50,4294967402,0 request:: '/brokers/topics/availability-topic-source-923068938,T response:: s{4294967402,4294967402,1586910417891,1586910417891,0,0,0,0,80,0,4294967402}
[2020-04-15 00:26:57,908] DEBUG [.zk.ZkImpl-1] Subscribed data changes for /brokers/topics/availability-topic-source-923068938
[2020-04-15 00:26:57,909] DEBUG [.zk.ZkImpl-2] Subscribed data changes for /config/topics/availability-topic-source-923068938
[2020-04-15 00:26:57,936] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 51,4 replyHeader:: 51,4294967405,0 request:: '/brokers/topics/availability-topic-source-923068938,T response:: #7b2276657273696f6e223a322c22706172746974696f6e73223a7b2230223a5b305d7d2c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d7d,s{4294967402,4294967402,1586910417891,1586910417891,0,1,0,0,80,1,4294967403}
[2020-04-15 00:26:57,936] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 52,4 replyHeader:: 52,4294967405,0 request:: '/config/topics/availability-topic-source-923068938,T response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{4294967401,4294967401,1586910417887,1586910417887,0,0,0,0,25,0,4294967401}
[2020-04-15 00:26:57,940] DEBUG [oop-thread-1] Backing off for 0ms on getting metadata for availability-topic-source-923068938
[2020-04-15 00:26:57,941] DEBUG [oop-thread-1] Getting metadata for topic availability-topic-source-923068938
[2020-04-15 00:26:57,941] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910537941) with a timeout 120000 ms from now.
[2020-04-15 00:26:57,941] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910537941) with a timeout 120000 ms from now.
[2020-04-15 00:26:57,961] DEBUG [oop-thread-1] Backing off for 200ms on getting metadata for availability-topic-source-923068938
[2020-04-15 00:26:58,163] DEBUG [oop-thread-1] Getting metadata for topic availability-topic-source-923068938
[2020-04-15 00:26:58,163] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910538163) with a timeout 120000 ms from now.
[2020-04-15 00:26:58,163] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910538163) with a timeout 120000 ms from now.
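Note: the repeated "Getting metadata for topic ...", "Queueing Call(callName=describeTopics|describeConfigs ...)" and "Backing off for 0ms|200ms ..." entries correspond to describing the new topic and its config through the Kafka admin client and retrying until the metadata is visible. A hedged sketch of that pattern against the kafka-clients API; the backoff schedule and every name other than describeTopics, describeConfigs and ConfigResource are assumptions, not the operator's actual code:

    import java.util.Collections;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.config.ConfigResource;
    import org.apache.kafka.common.errors.RetriableException;

    // Illustrative only: describe a topic and its config, retrying with a growing delay
    // (the log shows delays of 0ms then 200ms; the exact schedule here is an assumption).
    class TopicMetadataFetcher {
        static void fetchWithBackoff(AdminClient admin, String topicName) throws Exception {
            long[] delaysMs = {0L, 200L, 400L, 800L};   // assumed schedule
            for (long delayMs : delaysMs) {
                Thread.sleep(delayMs);
                try {
                    TopicDescription description = admin.describeTopics(Collections.singleton(topicName))
                            .all().get().get(topicName);
                    ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, topicName);
                    Config config = admin.describeConfigs(Collections.singleton(resource))
                            .all().get().get(resource);
                    System.out.println(topicName + ": partitions=" + description.partitions().size()
                            + ", config entries=" + config.entries().size());
                    return;
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof RetriableException) {
                        continue;       // topic metadata not available yet: back off and retry
                    }
                    throw e;            // anything else is treated as fatal
                }
            }
        }
    }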
[2020-04-15 00:26:58,199] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 53,4 replyHeader:: 53,4294967405,-101 request:: '/strimzi/topics/availability-topic-source-923068938,F response::
[2020-04-15 00:26:58,312] INFO [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Reconciling topic availability-topic-source-923068938, k8sTopic:null, kafkaTopic:nonnull, privateTopic:null
[2020-04-15 00:26:58,312] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: topic created in kafka, will create KafkaTopic in k8s and topicStore
[2020-04-15 00:26:58,315] DEBUG [oop-thread-1] Enqueuing event CreateResource(topicName=availability-topic-source-923068938,ctx=21|/brokers/topics +availability-topic-source-923068938)
[2020-04-15 00:26:58,329] INFO [2.30.0.1/...] 22|kube +availability-topic-source-923068938|132670: event ADDED on resource availability-topic-source-923068938 generation=1, labels={strimzi.io/cluster=my-cluster-source}
[2020-04-15 00:26:58,329] DEBUG [2.30.0.1/...] 22|kube +availability-topic-source-923068938|132670: Queuing action onResourceEvent on topic availability-topic-source-923068938
[2020-04-15 00:26:58,329] DEBUG [ker-thread-2] KafkaTopic availability-topic-source-923068938 created with version null->132670
[2020-04-15 00:26:58,329] DEBUG [2.30.0.1/...] 22|kube +availability-topic-source-923068938|132670: Adding waiter onResourceEvent: 2
[2020-04-15 00:26:58,330] DEBUG [oop-thread-1] Enqueuing event CreateInTopicStore(topicName=availability-topic-source-923068938,ctx=21|/brokers/topics +availability-topic-source-923068938)
[2020-04-15 00:26:58,330] DEBUG [oop-thread-1] Executing CreateInTopicStore(topicName=availability-topic-source-923068938,ctx=21|/brokers/topics +availability-topic-source-923068938)
[2020-04-15 00:26:58,331] DEBUG [oop-thread-1] create znode /strimzi/topics/availability-topic-source-923068938
[2020-04-15 00:26:58,390] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 54,1 replyHeader:: 54,4294967406,0 request:: '/strimzi/topics/availability-topic-source-923068938,#7b226d61702d6e616d65223a22617661696c6162696c6974792d746f7069632d736f757263652d393233303638393338222c22746f7069632d6e616d65223a22617661696c6162696c6974792d746f7069632d736f757263652d393233303638393338222c22706172746974696f6e73223a312c227265706c69636173223a312c22636f6e666967223a7b226d6573736167652e666f726d61742e76657273696f6e223a22322e342d495631227d7d,v{s{15,s{'world,'anyone}}},0 response:: '/strimzi/topics/availability-topic-source-923068938
[2020-04-15 00:26:58,391] DEBUG [oop-thread-1] Completing CreateInTopicStore(topicName=availability-topic-source-923068938,ctx=21|/brokers/topics +availability-topic-source-923068938)
[2020-04-15 00:26:58,391] DEBUG [oop-thread-1] CreateInTopicStore(topicName=availability-topic-source-923068938,ctx=21|/brokers/topics +availability-topic-source-923068938) succeeded
[2020-04-15 00:26:58,391] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Executing handler for action onTopicCreated on topic availability-topic-source-923068938
[2020-04-15 00:26:58,391] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: There is a KafkaTopic to set status on, rv=132670, generation=1
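Note: "Reconciling topic ..., k8sTopic:null, kafkaTopic:nonnull, privateTopic:null" is the operator's three-way comparison of the KafkaTopic resource in Kubernetes, the topic in Kafka, and its private topic-store copy under /strimzi/topics; since only the Kafka side exists here, it creates the KafkaTopic (CreateResource) and the store entry (CreateInTopicStore). A simplified sketch of that decision, covering only the combinations visible in this log; the class and return strings are hypothetical:

    // Illustrative only: the reconciliation cases visible in this log. Other combinations
    // (updates, deletions, conflicts) are handled by the operator but omitted here.
    class ReconciliationSketch {
        static String decide(Object k8sTopic, Object kafkaTopic, Object privateTopic) {
            if (k8sTopic == null && kafkaTopic != null && privateTopic == null) {
                // "topic created in kafka, will create KafkaTopic in k8s and topicStore"
                return "create KafkaTopic in k8s and topicStore";
            }
            if (k8sTopic != null && kafkaTopic == null && privateTopic == null) {
                // the mirror case: a KafkaTopic resource was created in Kubernetes first
                return "create topic in Kafka and topicStore";
            }
            return "other cases (updates, deletions, conflicts) omitted from this sketch";
        }
    }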
{"op":"replace","path":"/","value":{"conditions":[{"type":"Ready","status":"True","lastTransitionTime":"2020-04-15T00:26:58.391Z"}],"observedGeneration":1}} [2020-04-15 00:26:58,392] DEBUG [oop-thread-1] Current Status path / has value [2020-04-15 00:26:58,392] DEBUG [oop-thread-1] Desired Status path / has value [2020-04-15 00:26:58,409] INFO [2.30.0.1/...] 23|kube =availability-topic-source-923068938|132671: event MODIFIED on resource availability-topic-source-923068938 generation=1, labels={strimzi.io/cluster=my-cluster-source} [2020-04-15 00:26:58,409] DEBUG [2.30.0.1/...] 23|kube =availability-topic-source-923068938|132671: Queuing action onResourceEvent on topic availability-topic-source-923068938 [2020-04-15 00:26:58,410] DEBUG [2.30.0.1/...] 23|kube =availability-topic-source-923068938|132671: Adding waiter onResourceEvent: 3 [2020-04-15 00:26:58,410] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: status was set rv=132671, generation=1, observedGeneration=1 [2020-04-15 00:26:58,410] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Success responding to creation of topic availability-topic-source-923068938 [2020-04-15 00:26:58,410] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Lock released [2020-04-15 00:26:58,410] DEBUG [oop-thread-1] 21|/brokers/topics +availability-topic-source-923068938: Removing waiter onTopicCreated, 2 waiters left [2020-04-15 00:26:58,410] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Lock acquired [2020-04-15 00:26:58,410] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Executing action onResourceEvent on topic availability-topic-source-923068938 [2020-04-15 00:26:58,416] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: last updated generation=1 [2020-04-15 00:26:58,416] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Ignoring modification event caused by my own status update on availability-topic-source-923068938 [2020-04-15 00:26:58,416] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Executing handler for action onResourceEvent on topic availability-topic-source-923068938 [2020-04-15 00:26:58,416] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: No KafkaTopic to set status [2020-04-15 00:26:58,416] INFO [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Success processing event ADDED on resource availability-topic-source-923068938 with labels {strimzi.io/cluster=my-cluster-source} [2020-04-15 00:26:58,417] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Lock released [2020-04-15 00:26:58,417] DEBUG [oop-thread-1] 22|kube +availability-topic-source-923068938|132670: Removing waiter onResourceEvent, 1 waiters left [2020-04-15 00:26:58,417] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Lock acquired [2020-04-15 00:26:58,417] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Executing action onResourceEvent on topic availability-topic-source-923068938 [2020-04-15 00:26:58,426] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: last updated generation=1 [2020-04-15 00:26:58,427] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Ignoring modification event caused by my own status update on availability-topic-source-923068938 [2020-04-15 00:26:58,427] DEBUG [oop-thread-1] 
[2020-04-15 00:26:58,427] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Executing handler for action onResourceEvent on topic availability-topic-source-923068938
[2020-04-15 00:26:58,427] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: No KafkaTopic to set status
[2020-04-15 00:26:58,427] INFO [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Success processing event MODIFIED on resource availability-topic-source-923068938 with labels {strimzi.io/cluster=my-cluster-source}
[2020-04-15 00:26:58,427] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Lock released
[2020-04-15 00:26:58,427] DEBUG [oop-thread-1] 23|kube =availability-topic-source-923068938|132671: Removing last waiter onResourceEvent
[2020-04-15 00:27:01,728] DEBUG [alhost:2181)] Got notification sessionid:0x1000140c8d70003
[2020-04-15 00:27:01,728] DEBUG [alhost:2181)] Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for sessionid 0x1000140c8d70003
[2020-04-15 00:27:01,729] DEBUG [-EventThread] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics
[2020-04-15 00:27:01,729] DEBUG [-EventThread] New event: ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:27:01,729] DEBUG [-EventThread] Leaving process event
[2020-04-15 00:27:01,729] DEBUG [calhost:2181] Delivering event #6 ZkEvent[Children of /brokers/topics changed sent to io.strimzi.operator.topic.zk.ZkImpl$$Lambda$208/197790614@265e3b2a]
[2020-04-15 00:27:01,751] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 55,3 replyHeader:: 55,4294967409,0 request:: '/brokers/topics,T response:: s{4294967304,4294967304,1586910284232,1586910284232,0,6,0,0,0,6,4294967409}
[2020-04-15 00:27:01,754] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 56,8 replyHeader:: 56,4294967409,0 request:: '/brokers/topics,T response:: v{'availability-topic-source-436735826,'availability-topic-source-923068938,'mm2-offset-syncs.my-cluster-target.internal,'mirrormaker2-topic-example-614585750,'my-topic-test-1,'__consumer_offsets}
[2020-04-15 00:27:01,754] DEBUG [calhost:2181] znode /brokers/topics now has children [availability-topic-source-436735826, availability-topic-source-923068938, mm2-offset-syncs.my-cluster-target.internal, mirrormaker2-topic-example-614585750, my-topic-test-1, __consumer_offsets], previous children [availability-topic-source-436735826, availability-topic-source-923068938, mm2-offset-syncs.my-cluster-target.internal, mirrormaker2-topic-example-614585750, my-topic-test-1]
[2020-04-15 00:27:01,754] INFO [calhost:2181] Created topics: [__consumer_offsets]
[2020-04-15 00:27:01,755] DEBUG [calhost:2181] Watching znode /config/topics/__consumer_offsets for changes
[2020-04-15 00:27:01,755] DEBUG [calhost:2181] Watching znode /brokers/topics/__consumer_offsets for changes
[2020-04-15 00:27:01,755] DEBUG [calhost:2181] 24|/brokers/topics +__consumer_offsets: Queuing action onTopicCreated on topic __consumer_offsets
[2020-04-15 00:27:01,755] DEBUG [calhost:2181] 24|/brokers/topics +__consumer_offsets: Adding first waiter onTopicCreated
[2020-04-15 00:27:01,756] DEBUG [calhost:2181] Delivering event #6 done
[2020-04-15 00:27:01,757] DEBUG [oop-thread-1] 24|/brokers/topics +__consumer_offsets: Lock acquired
[2020-04-15 00:27:01,757] DEBUG [oop-thread-1] 24|/brokers/topics +__consumer_offsets: Executing action onTopicCreated on topic __consumer_offsets
[2020-04-15 00:27:01,757] DEBUG [oop-thread-1] Getting metadata for topic __consumer_offsets
[2020-04-15 00:27:01,757] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586910541757) with a timeout 120000 ms from now.
[2020-04-15 00:27:01,757] DEBUG [oop-thread-1] [AdminClient clientId=adminclient-1] Queueing Call(callName=describeConfigs, deadlineMs=1586910541757) with a timeout 120000 ms from now.
[2020-04-15 00:27:01,758] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 57,3 replyHeader:: 57,4294967409,0 request:: '/config/topics/__consumer_offsets,T response:: s{4294967408,4294967408,1586910421715,1586910421715,0,0,0,0,109,0,4294967408}
[2020-04-15 00:27:01,759] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 58,3 replyHeader:: 58,4294967409,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{4294967409,4294967409,1586910421723,1586910421723,0,0,0,0,512,0,4294967409}
[2020-04-15 00:27:01,759] DEBUG [.zk.ZkImpl-0] Subscribed data changes for /brokers/topics/__consumer_offsets
[2020-04-15 00:27:01,762] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 59,4 replyHeader:: 59,4294967409,0 request:: '/brokers/topics/__consumer_offsets,T response:: #7b2276657273696f6e223a322c22706172746974696f6e73223a7b223330223a5b305d2c223339223a5b305d2c223435223a5b305d2c2232223a5b305d2c2235223a5b305d2c223438223a5b305d2c223333223a5b305d2c223237223a5b305d2c223132223a5b305d2c2238223a5b305d2c223135223a5b305d2c223432223a5b305d2c223336223a5b305d2c223231223a5b305d2c223138223a5b305d2c223234223a5b305d2c223335223a5b305d2c223431223a5b305d2c2237223a5b305d2c223137223a5b305d2c2231223a5b305d2c223434223a5b305d2c223233223a5b305d2c223338223a5b305d2c223437223a5b305d2c2234223a5b305d2c223236223a5b305d2c223131223a5b305d2c223332223a5b305d2c223134223a5b305d2c223230223a5b305d2c223239223a5b305d2c223436223a5b305d2c223334223a5b305d2c223238223a5b305d2c2236223a5b305d2c223430223a5b305d2c223439223a5b305d2c2239223a5b305d2c223433223a5b305d2c2230223a5b305d2c223232223a5b305d2c223136223a5b305d2c223337223a5b305d2c223139223a5b305d2c2233223a5b305d2c223130223a5b305d2c223331223a5b305d2c223235223a5b305d2c223133223a5b305d7d2c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d7d,s{4294967409,4294967409,1586910421723,1586910421723,0,0,0,0,512,0,4294967409}
[2020-04-15 00:27:01,767] DEBUG [.zk.ZkImpl-3] Subscribed data changes for /config/topics/__consumer_offsets
[2020-04-15 00:27:01,772] DEBUG [alhost:2181)] Reading reply sessionid:0x1000140c8d70003, packet:: clientPath:null serverPath:null finished:false header:: 60,4 replyHeader:: 60,4294967410,0 request:: '/config/topics/__consumer_offsets,T response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{4294967408,4294967408,1586910421715,1586910421715,0,0,0,0,109,0,4294967408}
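Note: the "#7b2276..." payloads in the ZooKeeper replies are znode contents printed as hex-encoded bytes; decoded as UTF-8 they are the JSON documents Kafka keeps under /brokers/topics/<name> and /config/topics/<name>. A small, hypothetical helper for decoding such a payload out of a log:

    import java.nio.charset.StandardCharsets;

    // Decode a ZooKeeper client log payload like "7b2276657273696f6e..." into its UTF-8 text.
    class ZkHexPayload {
        static String decode(String hex) {
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return new String(bytes, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            // The /config/topics/availability-topic-source-923068938 payload from earlier in this log:
            System.out.println(decode("7b2276657273696f6e223a312c22636f6e666967223a7b7d7d"));
            // prints {"version":1,"config":{}}
        }
    }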
[2020-04-15 00:27:01,803] DEBUG [oop-thread-1] Future KafkaFuture{value=null,exception=org.apache.kafka.common.errors.LeaderNotAvailableException: There is no leader for this topic-partition as we are in the middle of a leadership election.,done=true} threw java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.LeaderNotAvailableException: There is no leader for this topic-partition as we are in the middle of a leadership election.
[2020-04-15 00:27:01,804] DEBUG [oop-thread-1] 24|/brokers/topics +__consumer_offsets: Executing handler for action onTopicCreated on topic __consumer_offsets
[2020-04-15 00:27:01,804] DEBUG [oop-thread-1] 24|/brokers/topics +__consumer_offsets: No KafkaTopic to set status
[2020-04-15 00:27:01,804] WARN [oop-thread-1] 24|/brokers/topics +__consumer_offsets: Error responding to creation of topic __consumer_offsets org.apache.kafka.common.errors.LeaderNotAvailableException: There is no leader for this topic-partition as we are in the middle of a leadership election.
[2020-04-15 00:27:01,805] DEBUG [oop-thread-1] 24|/brokers/topics +__consumer_offsets: Lock released
[2020-04-15 00:27:01,805] DEBUG [oop-thread-1] 24|/brokers/topics +__consumer_offsets: Removing last waiter onTopicCreated
[2020-04-15 00:27:55,155] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:28:25,273] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:29:28,282] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 2ms
[2020-04-15 00:29:55,448] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:30:02,123] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 2ms
[2020-04-15 00:30:28,800] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:30:55,485] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:31:25,699] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:31:32,372] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:32:25,744] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:32:55,938] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:33:02,611] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 0ms
[2020-04-15 00:33:29,299] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:33:55,987] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 1ms
[2020-04-15 00:34:26,166] DEBUG [alhost:2181)] Got ping response for sessionid: 0x1000140c8d70003 after 5ms
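Note: the WARN above is the failure path of the same metadata fetch: the admin-client KafkaFuture completes exceptionally, get() throws an ExecutionException wrapping LeaderNotAvailableException, and because __consumer_offsets has no KafkaTopic yet there is no status to record the error on. LeaderNotAvailableException is a retriable error (the cluster is mid leadership election), so a later retry or the operator's periodic reconciliation can pick the topic up. A sketch of unwrapping and classifying such a failure; the helper names are assumptions:

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.common.KafkaFuture;
    import org.apache.kafka.common.errors.RetriableException;

    // Illustrative only: unwrap the cause of a failed admin-client future and decide whether
    // the error (e.g. LeaderNotAvailableException during a leader election) is worth retrying.
    class AdminFailureHandling {
        static <T> T getOrClassify(KafkaFuture<T> future) throws Exception {
            try {
                return future.get();
            } catch (ExecutionException e) {
                Throwable cause = e.getCause();
                if (cause instanceof RetriableException) {
                    // e.g. org.apache.kafka.common.errors.LeaderNotAvailableException:
                    // log a warning and let a later retry / periodic reconciliation handle it
                    System.err.println("Retriable failure: " + cause.getMessage());
                    return null;
                }
                throw e; // non-retriable: propagate
            }
        }
    }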