➜ DEBEZIUM docker-compose up
Starting debezium_zookeeper_1 ... done
Starting debezium_mysql_1     ... done
Starting debezium_kafka_1     ... done
Starting debezium_connect_1   ... done
Attaching to debezium_mysql_1, debezium_zookeeper_1, debezium_kafka_1, debezium_connect_1
zookeeper_1 | Starting up in standalone mode
mysql_1     | 2022-04-21 13:24:24+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.28-1debian10 started.
kafka_1     | WARNING: Using default NODE_ID=1, which is valid only for non-clustered installations.
kafka_1     | Starting in ZooKeeper mode using NODE_ID=1.
kafka_1     | Using ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1     | Using configuration config/server.properties.
zookeeper_1 | /usr/bin/java
kafka_1     | Using KAFKA_LISTENERS=PLAINTEXT://172.19.0.4:9092 and KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.19.0.4:9092
mysql_1     | 2022-04-21 13:24:24+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
connect_1   | Using BOOTSTRAP_SERVERS=kafka:9092
mysql_1     | 2022-04-21 13:24:24+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.28-1debian10 started.
zookeeper_1 | ZooKeeper JMX enabled by default
zookeeper_1 | Using config: /zookeeper/conf/zoo.cfg
mysql_1     | 2022-04-21T13:24:24.719711Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
mysql_1     | 2022-04-21T13:24:24.719737Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead.
mysql_1     | 2022-04-21T13:24:24.719878Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
mysql_1     | 2022-04-21T13:24:24.719907Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28) starting as process 1
connect_1   | Plugins are loaded from /kafka/connect
mysql_1     | 2022-04-21T13:24:24.731576Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
zookeeper_1 | 2022-04-21 13:24:24,937 - INFO [main:QuorumPeerConfig@174] - Reading configuration from: /zookeeper/conf/zoo.cfg
zookeeper_1 | 2022-04-21 13:24:24,953 - INFO [main:QuorumPeerConfig@460] - clientPortAddress is 0.0.0.0:2181
zookeeper_1 | 2022-04-21 13:24:24,954 - INFO [main:QuorumPeerConfig@464] - secureClientPort is not set
zookeeper_1 | 2022-04-21 13:24:24,954 - INFO [main:QuorumPeerConfig@480] - observerMasterPort is not set
mysql_1     | 2022-04-21T13:24:25.064107Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
zookeeper_1 | 2022-04-21 13:24:24,955 - INFO [main:QuorumPeerConfig@497] - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
zookeeper_1 | 2022-04-21 13:24:24,959 - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper_1 | 2022-04-21 13:24:24,959 - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper_1 | 2022-04-21 13:24:24,960 - WARN [main:QuorumPeerMain@138] - Either no config or no quorum defined in config, running in standalone mode
zookeeper_1 | 2022-04-21 13:24:24,960 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@139] - Purge task started.
zookeeper_1 | 2022-04-21 13:24:24,970 - INFO [main:ManagedUtil@44] - Log4j 1.2 jmx support found and enabled.
zookeeper_1 | 2022-04-21 13:24:24,974 - INFO [PurgeTask:FileTxnSnapLog@124] - zookeeper.snapshot.trust.empty : false
zookeeper_1 | 2022-04-21 13:24:24,983 - INFO [main:QuorumPeerConfig@174] - Reading configuration from: /zookeeper/conf/zoo.cfg
zookeeper_1 | 2022-04-21 13:24:24,984 - INFO [main:QuorumPeerConfig@460] - clientPortAddress is 0.0.0.0:2181
zookeeper_1 | 2022-04-21 13:24:24,985 - INFO [main:QuorumPeerConfig@464] - secureClientPort is not set
zookeeper_1 | 2022-04-21 13:24:24,985 - INFO [main:QuorumPeerConfig@480] - observerMasterPort is not set
zookeeper_1 | 2022-04-21 13:24:24,985 - INFO [main:QuorumPeerConfig@497] - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
zookeeper_1 | 2022-04-21 13:24:24,987 - INFO [main:ZooKeeperServerMain@122] - Starting server
zookeeper_1 | 2022-04-21 13:24:24,988 - INFO [PurgeTask:SnapStream@61] - zookeeper.snapshot.compression.method = CHECKED
zookeeper_1 | 2022-04-21 13:24:24,990 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@145] - Purge task completed.
zookeeper_1 | 2022-04-21 13:24:25,003 - INFO [main:ServerMetrics@62] - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@157632c9
zookeeper_1 | 2022-04-21 13:24:25,005 - INFO [main:FileTxnSnapLog@124] - zookeeper.snapshot.trust.empty : false
connect_1   | Using the following environment variables:
connect_1   |   GROUP_ID=1
connect_1   |   CONFIG_STORAGE_TOPIC=my_connect_configs
zookeeper_1 | 2022-04-21 13:24:25,012 - INFO [main:ZookeeperBanner@42] -
zookeeper_1 | 2022-04-21 13:24:25,012 - INFO [main:ZookeeperBanner@42] -   ______                  _
zookeeper_1 | 2022-04-21 13:24:25,013 - INFO [main:ZookeeperBanner@42] -  |___  /                 | |
zookeeper_1 | 2022-04-21 13:24:25,013 - INFO [main:ZookeeperBanner@42] -     / /   ___     ___    | | __   ___    ___   _ __     ___   _ __
connect_1   |   OFFSET_STORAGE_TOPIC=my_connect_offsets
connect_1   |   STATUS_STORAGE_TOPIC=my_connect_statuses
connect_1   |   BOOTSTRAP_SERVERS=kafka:9092
zookeeper_1 | 2022-04-21 13:24:25,013 - INFO [main:ZookeeperBanner@42] -    / /   / _ \   / _ \   | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__|
zookeeper_1 | 2022-04-21 13:24:25,013 - INFO [main:ZookeeperBanner@42] -   / /__ | (_) | | (_) |  | |  <  |  __/ |  __/ | |_) | |  __/ | |
connect_1   |   REST_HOST_NAME=172.19.0.5
connect_1   |   REST_PORT=8083
connect_1   |   ADVERTISED_HOST_NAME=172.19.0.5
connect_1   |   ADVERTISED_PORT=8083
connect_1   |   KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
connect_1   |   VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
connect_1   |   OFFSET_FLUSH_INTERVAL_MS=60000
connect_1   |   OFFSET_FLUSH_TIMEOUT_MS=5000
connect_1   |   SHUTDOWN_TIMEOUT=10000
zookeeper_1 | 2022-04-21 13:24:25,013 - INFO [main:ZookeeperBanner@42] -  /_____|  \___/   \___/  |_|\_\  \___|  \___| | .__/   \___| |_|
zookeeper_1 | 2022-04-21 13:24:25,014 - INFO [main:ZookeeperBanner@42] -                                               | |
zookeeper_1 | 2022-04-21 13:24:25,014 - INFO [main:ZookeeperBanner@42] -                                               |_|
zookeeper_1 | 2022-04-21 13:24:25,014 - INFO [main:ZookeeperBanner@42] -
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:host.name=e8489e169e55
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.version=11.0.14.1
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.vendor=Red Hat, Inc.
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-11.0.14.1.1-5.fc34.x86_64
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.class.path=/zookeeper/bin/../zookeeper-server/target/classes:/zookeeper/bin/../build/classes:/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/zookeeper/bin/../build/lib/*.jar:/zookeeper/lib/zookeeper-prometheus-metrics-3.6.3.jar:/zookeeper/lib/zookeeper-jute-3.6.3.jar:/zookeeper/lib/zookeeper-3.6.3.jar:/zookeeper/lib/snappy-java-1.1.7.jar:/zookeeper/lib/slf4j-log4j12-1.7.25.jar:/zookeeper/lib/slf4j-api-1.7.25.jar:/zookeeper/lib/simpleclient_servlet-0.6.0.jar:/zookeeper/lib/simpleclient_hotspot-0.6.0.jar:/zookeeper/lib/simpleclient_common-0.6.0.jar:/zookeeper/lib/simpleclient-0.6.0.jar:/zookeeper/lib/netty-transport-native-unix-common-4.1.63.Final.jar:/zookeeper/lib/netty-transport-native-epoll-4.1.63.Final.jar:/zookeeper/lib/netty-transport-4.1.63.Final.jar:/zookeeper/lib/netty-resolver-4.1.63.Final.jar:/zookeeper/lib/netty-handler-4.1.63.Final.jar:/zookeeper/lib/netty-common-4.1.63.Final.jar:/zookeeper/lib/netty-codec-4.1.63.Final.jar:/zookeeper/lib/netty-buffer-4.1.63.Final.jar:/zookeeper/lib/metrics-core-3.2.5.jar:/zookeeper/lib/log4j-1.2.17.jar:/zookeeper/lib/json-simple-1.1.1.jar:/zookeeper/lib/jline-2.14.6.jar:/zookeeper/lib/jetty-util-ajax-9.4.39.v20210325.jar:/zookeeper/lib/jetty-util-9.4.39.v20210325.jar:/zookeeper/lib/jetty-servlet-9.4.39.v20210325.jar:/zookeeper/lib/jetty-server-9.4.39.v20210325.jar:/zookeeper/lib/jetty-security-9.4.39.v20210325.jar:/zookeeper/lib/jetty-io-9.4.39.v20210325.jar:/zookeeper/lib/jetty-http-9.4.39.v20210325.jar:/zookeeper/lib/javax.servlet-api-3.1.0.jar:/zookeeper/lib/jackson-databind-2.10.5.1.jar:/zookeeper/lib/jackson-core-2.10.5.jar:/zookeeper/lib/jackson-annotations-2.10.5.jar:/zookeeper/lib/commons-cli-1.2.jar:/zookeeper/lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-*.jar:/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/zookeeper/conf:
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.io.tmpdir=/tmp
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:java.compiler=
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:os.name=Linux
zookeeper_1 | 2022-04-21 13:24:25,016 - INFO [main:Environment@98] - Server environment:os.arch=amd64
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:os.version=5.8.0-59-generic
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:user.name=zookeeper
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:user.home=/zookeeper
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:user.dir=/zookeeper
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:os.memory.free=226MB
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:os.memory.max=1000MB
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:Environment@98] - Server environment:os.memory.total=242MB
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:ZooKeeperServer@129] - zookeeper.enableEagerACLCheck = false
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:ZooKeeperServer@137] - zookeeper.digest.enabled = true
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:ZooKeeperServer@141] - zookeeper.closeSessionTxn.enabled = true
zookeeper_1 | 2022-04-21 13:24:25,017 - INFO [main:ZooKeeperServer@1461] - zookeeper.flushDelay=0
zookeeper_1 | 2022-04-21 13:24:25,018 - INFO [main:ZooKeeperServer@1470] - zookeeper.maxWriteQueuePollTime=0
zookeeper_1 | 2022-04-21 13:24:25,018 - INFO [main:ZooKeeperServer@1479] - zookeeper.maxBatchSize=1000
zookeeper_1 | 2022-04-21 13:24:25,018 - INFO [main:ZooKeeperServer@243] - zookeeper.intBufferStartingSizeBytes = 1024
zookeeper_1 | 2022-04-21 13:24:25,019 - INFO [main:BlueThrottle@141] - Weighed connection throttling is disabled
zookeeper_1 | 2022-04-21 13:24:25,021 - INFO [main:ZooKeeperServer@1273] - minSessionTimeout set to 4000
zookeeper_1 | 2022-04-21 13:24:25,022 - INFO [main:ZooKeeperServer@1282] - maxSessionTimeout set to 40000
zookeeper_1 | 2022-04-21 13:24:25,023 - INFO [main:ResponseCache@45] - Response cache size is initialized with value 400.
zookeeper_1 | 2022-04-21 13:24:25,024 - INFO [main:ResponseCache@45] - Response cache size is initialized with value 400.
zookeeper_1 | 2022-04-21 13:24:25,025 - INFO [main:RequestPathMetricsCollector@109] - zookeeper.pathStats.slotCapacity = 60
zookeeper_1 | 2022-04-21 13:24:25,025 - INFO [main:RequestPathMetricsCollector@110] - zookeeper.pathStats.slotDuration = 15
zookeeper_1 | 2022-04-21 13:24:25,025 - INFO [main:RequestPathMetricsCollector@111] - zookeeper.pathStats.maxDepth = 6
zookeeper_1 | 2022-04-21 13:24:25,025 - INFO [main:RequestPathMetricsCollector@112] - zookeeper.pathStats.initialDelay = 5
zookeeper_1 | 2022-04-21 13:24:25,025 - INFO [main:RequestPathMetricsCollector@113] - zookeeper.pathStats.delay = 5
zookeeper_1 | 2022-04-21 13:24:25,025 - INFO [main:RequestPathMetricsCollector@114] - zookeeper.pathStats.enabled = false
zookeeper_1 | 2022-04-21 13:24:25,028 - INFO [main:ZooKeeperServer@1498] - The max bytes for all large requests are set to 104857600
zookeeper_1 | 2022-04-21 13:24:25,028 - INFO [main:ZooKeeperServer@1512] - The large request threshold is set to -1
zookeeper_1 | 2022-04-21 13:24:25,028 - INFO [main:ZooKeeperServer@339] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 clientPortListenBacklog -1 datadir /zookeeper/txns/version-2 snapdir /zookeeper/data/version-2
zookeeper_1 | 2022-04-21 13:24:25,068 - INFO [main:Log@169] - Logging initialized @679ms to org.eclipse.jetty.util.log.Slf4jLog
connect_1   | --- Setting property from CONNECT_REST_ADVERTISED_PORT: rest.advertised.port=8083
zookeeper_1 | 2022-04-21 13:24:25,178 - WARN [main:ContextHandler@1660] - o.e.j.s.ServletContextHandler@a1153bc{/,null,STOPPED} contextPath ends with /*
zookeeper_1 | 2022-04-21 13:24:25,179 - WARN [main:ContextHandler@1671] - Empty contextPath
connect_1   | --- Setting property from CONNECT_OFFSET_STORAGE_TOPIC: offset.storage.topic=my_connect_offsets
zookeeper_1 | 2022-04-21 13:24:25,214 - INFO [main:Server@375] - jetty-9.4.39.v20210325; built: 2021-03-25T14:42:11.471Z; git: 9fc7ca5a922f2a37b84ec9dbc26a5168cee7e667; jvm 11.0.14.1+1
connect_1   | --- Setting property from CONNECT_KEY_CONVERTER: key.converter=org.apache.kafka.connect.json.JsonConverter
mysql_1     | 2022-04-21T13:24:25.226527Z 0 [System] [MY-010229] [Server] Starting XA crash recovery...
connect_1   | --- Setting property from CONNECT_CONFIG_STORAGE_TOPIC: config.storage.topic=my_connect_configs
mysql_1     | 2022-04-21T13:24:25.236509Z 0 [System] [MY-010232] [Server] XA crash recovery finished.
connect_1   | --- Setting property from CONNECT_GROUP_ID: group.id=1
zookeeper_1 | 2022-04-21 13:24:25,259 - INFO [main:DefaultSessionIdManager@334] - DefaultSessionIdManager workerName=node0
zookeeper_1 | 2022-04-21 13:24:25,259 - INFO [main:DefaultSessionIdManager@339] - No SessionScavenger set, using defaults
zookeeper_1 | 2022-04-21 13:24:25,261 - INFO [main:HouseKeeper@132] - node0 Scavenging every 660000ms
connect_1   | --- Setting property from CONNECT_REST_ADVERTISED_HOST_NAME: rest.advertised.host.name=172.19.0.5
zookeeper_1 | 2022-04-21 13:24:25,277 - WARN [main:ConstraintSecurityHandler@759] - ServletContext@o.e.j.s.ServletContextHandler@a1153bc{/,null,STARTING} has uncovered http methods for path: /*
zookeeper_1 | 2022-04-21 13:24:25,290 - INFO [main:ContextHandler@916] - Started o.e.j.s.ServletContextHandler@a1153bc{/,null,AVAILABLE}
connect_1   | --- Setting property from CONNECT_REST_HOST_NAME: rest.host.name=172.19.0.5
zookeeper_1 | 2022-04-21 13:24:25,308 - INFO [main:AbstractConnector@331] - Started ServerConnector@795cd85e{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
zookeeper_1 | 2022-04-21 13:24:25,308 - INFO [main:Server@415] - Started @919ms
zookeeper_1 | 2022-04-21 13:24:25,309 - INFO [main:JettyAdminServer@191] - Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands
zookeeper_1 | 2022-04-21 13:24:25,313 - INFO [main:ServerCnxnFactory@169] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
connect_1   | --- Setting property from CONNECT_VALUE_CONVERTER: value.converter=org.apache.kafka.connect.json.JsonConverter
zookeeper_1 | 2022-04-21 13:24:25,314 - WARN [main:ServerCnxnFactory@309] - maxCnxns is not configured, using default value 0.
zookeeper_1 | 2022-04-21 13:24:25,316 - INFO [main:NIOServerCnxnFactory@666] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 24 worker threads, and 64 kB direct buffers.
zookeeper_1 | 2022-04-21 13:24:25,317 - INFO [main:NIOServerCnxnFactory@674] - binding to port 0.0.0.0/0.0.0.0:2181
mysql_1     | 2022-04-21T13:24:25.322259Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
mysql_1     | 2022-04-21T13:24:25.322308Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
mysql_1     | 2022-04-21T13:24:25.324091Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
connect_1   | --- Setting property from CONNECT_REST_PORT: rest.port=8083
mysql_1     | 2022-04-21T13:24:25.344984Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
mysql_1     | 2022-04-21T13:24:25.345050Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
zookeeper_1 | 2022-04-21 13:24:25,344 - INFO [main:WatchManagerFactory@42] - Using org.apache.zookeeper.server.watch.WatchManager as watch manager
zookeeper_1 | 2022-04-21 13:24:25,345 - INFO [main:WatchManagerFactory@42] - Using org.apache.zookeeper.server.watch.WatchManager as watch manager
zookeeper_1 | 2022-04-21 13:24:25,347 - INFO [main:ZKDatabase@132] - zookeeper.snapshotSizeFactor = 0.33
zookeeper_1 | 2022-04-21 13:24:25,347 - INFO [main:ZKDatabase@152] - zookeeper.commitLogCount=500
zookeeper_1 | 2022-04-21 13:24:25,347 - INFO [main:FileSnap@85] - Reading snapshot /zookeeper/data/version-2/snapshot.e3
connect_1   | --- Setting property from CONNECT_STATUS_STORAGE_TOPIC: status.storage.topic=my_connect_statuses
zookeeper_1 | 2022-04-21 13:24:25,360 - INFO [main:DataTree@1730] - The digest in the snapshot has digest version of 2, , with zxid as 0xe3, and digest value as 457437128025
connect_1   | --- Setting property from CONNECT_OFFSET_FLUSH_TIMEOUT_MS: offset.flush.timeout.ms=5000
zookeeper_1 | 2022-04-21 13:24:25,386 - INFO [main:ZKAuditProvider@42] - ZooKeeper audit is disabled.
zookeeper_1 | 2022-04-21 13:24:25,387 - INFO [main:FileTxnSnapLog@363] - 37 txns loaded in 19 ms
zookeeper_1 | 2022-04-21 13:24:25,388 - INFO [main:ZKDatabase@289] - Snapshot loaded in 40 ms, highest zxid is 0x108, digest is 483564208944
zookeeper_1 | 2022-04-21 13:24:25,388 - INFO [main:FileTxnSnapLog@470] - Snapshotting: 0x108 to /zookeeper/data/version-2/snapshot.108
connect_1   | --- Setting property from CONNECT_PLUGIN_PATH: plugin.path=/kafka/connect
zookeeper_1 | 2022-04-21 13:24:25,397 - INFO [main:ZooKeeperServer@529] - Snapshot taken in 9 ms
connect_1   | --- Setting property from CONNECT_OFFSET_FLUSH_INTERVAL_MS: offset.flush.interval.ms=60000
zookeeper_1 | 2022-04-21 13:24:25,410 - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@136] - PrepRequestProcessor (sid:0) started, reconfigEnabled=false
zookeeper_1 | 2022-04-21 13:24:25,410 - INFO [main:RequestThrottler@74] - zookeeper.request_throttler.shutdownTimeout = 10000
connect_1   | --- Setting property from CONNECT_BOOTSTRAP_SERVERS: bootstrap.servers=kafka:9092
zookeeper_1 | 2022-04-21 13:24:25,425 - INFO [main:ContainerManager@83] - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0
connect_1   | --- Setting property from CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: task.shutdown.graceful.timeout.ms=10000
kafka_1     | 2022-04-21 13:24:25,846 - INFO [main:Log4jControllerRegistration$@31] - Registered kafka:type=kafka.Log4jController MBean
kafka_1     | 2022-04-21 13:24:26,194 - INFO [main:X509Util@77] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1     | 2022-04-21 13:24:26,298 - INFO [main:LoggingSignalHandler@72] - Registered signal handlers for TERM, INT, HUP
kafka_1     | 2022-04-21 13:24:26,304 - INFO [main:Logging@66] - starting
kafka_1     | 2022-04-21 13:24:26,305 - INFO [main:Logging@66] - Connecting to zookeeper on zookeeper:2181
kafka_1     | 2022-04-21 13:24:26,326 - INFO [main:Logging@66] - [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
kafka_1     | 2022-04-21 13:24:26,336 - INFO [main:Environment@98] - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
kafka_1     | 2022-04-21 13:24:26,336 - INFO [main:Environment@98] - Client environment:host.name=9ecf8210a27b
kafka_1     | 2022-04-21 13:24:26,336 - INFO [main:Environment@98] - Client environment:java.version=11.0.14.1
kafka_1     | 2022-04-21 13:24:26,337 - INFO [main:Environment@98] - Client environment:java.vendor=Red Hat, Inc.
kafka_1     | 2022-04-21 13:24:26,337 - INFO [main:Environment@98] - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-11.0.14.1.1-5.fc34.x86_64
kafka_1     | 2022-04-21 13:24:26,337 - INFO [main:Environment@98] - Client environment:java.class.path=/kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.5.0.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-lang3-3.8.1.jar:/kafka/libs/connect-api-3.1.0.jar:/kafka/libs/connect-basic-auth-extension-3.1.0.jar:/kafka/libs/connect-file-3.1.0.jar:/kafka/libs/connect-json-3.1.0.jar:/kafka/libs/connect-mirror-3.1.0.jar:/kafka/libs/connect-mirror-client-3.1.0.jar:/kafka/libs/connect-runtime-3.1.0.jar:/kafka/libs/connect-transforms-3.1.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.12.3.jar:/kafka/libs/jackson-core-2.12.3.jar:/kafka/libs/jackson-databind-2.12.3.jar:/kafka/libs/jackson-dataformat-csv-2.12.3.jar:/kafka/libs/jackson-datatype-jdk8-2.12.3.jar:/kafka/libs/jackson-jaxrs-base-2.12.3.jar:/kafka/libs/jackson-jaxrs-json-provider-2.12.3.jar:/kafka/libs/jackson-module-jaxb-annotations-2.12.3.jar:/kafka/libs/jackson-module-scala_2.13-2.12.3.jar:/kafka/libs/jakarta.activation-api-1.2.1.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.2.jar:/kafka/libs/javassist-3.27.0-GA.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.0.jar:/kafka/libs/jersey-client-2.34.jar:/kafka/libs/jersey-common-2.34.jar:/kafka/libs/jersey-container-servlet-2.34.jar:/kafka/libs/jersey-container-servlet-core-2.34.jar:/kafka/libs/jersey-hk2-2.34.jar:/kafka/libs/jersey-server-2.34.jar:/kafka/libs/jetty-client-9.4.43.v20210629.jar:/kafka/libs/jetty-continuation-9.4.43.v20210629.jar:/kafka/libs/jetty-http-9.4.43.v20210629.jar:/kafka/libs/jetty-io-9.4.43.v20210629.jar:/kafka/libs/jetty-security-9.4.43.v20210629.jar:/kafka/libs/jetty-server-9.4.43.v20210629.jar:/kafka/libs/jetty-servlet-9.4.43.v20210629.jar:/kafka/libs/jetty-servlets-9.4.43.v20210629.jar:/kafka/libs/jetty-util-9.4.43.v20210629.jar:/kafka/libs/jetty-util-ajax-9.4.43.v20210629.jar:/kafka/libs/jline-3.12.1.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/jose4j-0.7.8.jar:/kafka/libs/kafka-clients-3.1.0.jar:/kafka/libs/kafka-log4j-appender-3.1.0.jar:/kafka/libs/kafka-metadata-3.1.0.jar:/kafka/libs/kafka-raft-3.1.0.jar:/kafka/libs/kafka-server-common-3.1.0.jar:/kafka/libs/kafka-shell-3.1.0.jar:/kafka/libs/kafka-storage-3.1.0.jar:/kafka/libs/kafka-storage-api-3.1.0.jar:/kafka/libs/kafka-streams-3.1.0.jar:/kafka/libs/kafka-streams-examples-3.1.0.jar:/kafka/libs/kafka-streams-scala_2.13-3.1.0.jar:/kafka/libs/kafka-streams-test-utils-3.1.0.jar:/kafka/libs/kafka-tools-3.1.0.jar:/kafka/libs/kafka_2.13-3.1.0.jar:/kafka/libs/log4j-1.2.17.jar:/kafka/libs/lz4-java-1.8.0.jar:/kafka/libs/maven-artifact-3.8.1.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.68.Final.jar:/kafka/libs/netty-codec-4.1.68.Final.jar:/kafka/libs/netty-common-4.1.68.Final.jar:/kafka/libs/netty-handler-4.1.68.Final.jar:/kafka/libs/netty-resolver-4.1.68.Final.jar:/kafka/libs/netty-transport-4.1.68.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.68.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.68.Final.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/plexus-utils-3.2.1.jar:/kafka/libs/reflections-0.9.12.jar:/kafka/libs/rocksdbjni-6.22.1.1.jar:/kafka/libs/scala-collection-compat_2.13-2.4.4.jar:/kafka/libs/scala-java8-compat_2.13-1.0.0.jar:/kafka/libs/scala-library-2.13.6.jar:/kafka/libs/scala-logging_2.13-3.9.3.jar:/kafka/libs/scala-reflect-2.13.6.jar:/kafka/libs/slf4j-api-1.7.30.jar:/kafka/libs/slf4j-log4j12-1.7.30.jar:/kafka/libs/snappy-java-1.1.8.4.jar:/kafka/libs/trogdor-3.1.0.jar:/kafka/libs/zookeeper-3.6.3.jar:/kafka/libs/zookeeper-jute-3.6.3.jar:/kafka/libs/zstd-jni-1.5.0-4.jar
kafka_1     | 2022-04-21 13:24:26,337 - INFO [main:Environment@98] - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1     | 2022-04-21 13:24:26,337 - INFO [main:Environment@98] - Client environment:java.io.tmpdir=/tmp
kafka_1     | 2022-04-21 13:24:26,338 - INFO [main:Environment@98] - Client environment:java.compiler=
kafka_1     | 2022-04-21 13:24:26,338 - INFO [main:Environment@98] - Client environment:os.name=Linux
kafka_1     | 2022-04-21 13:24:26,339 - INFO [main:Environment@98] - Client environment:os.arch=amd64
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:os.version=5.8.0-59-generic
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:user.name=kafka
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:user.home=/kafka
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:user.dir=/kafka
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:os.memory.free=973MB
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:os.memory.max=1024MB
kafka_1     | 2022-04-21 13:24:26,340 - INFO [main:Environment@98] - Client environment:os.memory.total=1024MB
kafka_1     | 2022-04-21 13:24:26,344 - INFO [main:ZooKeeper@1006] - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1863d2fe
kafka_1     | 2022-04-21 13:24:26,350 - INFO [main:ClientCnxnSocket@239] - jute.maxbuffer value is 4194304 Bytes
kafka_1     | 2022-04-21 13:24:26,357 - INFO [main:ClientCnxn@1736] - zookeeper.request.timeout value is 0. feature enabled=false
connect_1   | 2022-04-21 13:24:26,360 INFO || WorkerInfo values:
connect_1   | jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/kafka/logs, -Dlog4j.configuration=file:/kafka/config/log4j.properties
connect_1   | jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 11.0.14.1, 11.0.14.1+1
connect_1   | jvm.classpath = /kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.5.0.jar:/kafka/libs/avro-1.10.1.jar:/kafka/libs/common-config-7.0.1.jar:/kafka/libs/common-utils-7.0.1.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-lang3-3.8.1.jar:/kafka/libs/connect-api-3.1.0.jar:/kafka/libs/connect-basic-auth-extension-3.1.0.jar:/kafka/libs/connect-file-3.1.0.jar:/kafka/libs/connect-json-3.1.0.jar:/kafka/libs/connect-mirror-3.1.0.jar:/kafka/libs/connect-mirror-client-3.1.0.jar:/kafka/libs/connect-runtime-3.1.0.jar:/kafka/libs/connect-transforms-3.1.0.jar:/kafka/libs/guava-31.0.1-jre.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.12.3.jar:/kafka/libs/jackson-core-2.12.3.jar:/kafka/libs/jackson-databind-2.12.3.jar:/kafka/libs/jackson-dataformat-csv-2.12.3.jar:/kafka/libs/jackson-datatype-jdk8-2.12.3.jar:/kafka/libs/jackson-jaxrs-base-2.12.3.jar:/kafka/libs/jackson-jaxrs-json-provider-2.12.3.jar:/kafka/libs/jackson-module-jaxb-annotations-2.12.3.jar:/kafka/libs/jackson-module-scala_2.13-2.12.3.jar:/kafka/libs/jakarta.activation-api-1.2.1.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.2.jar:/kafka/libs/javassist-3.27.0-GA.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.0.jar:/kafka/libs/jersey-client-2.34.jar:/kafka/libs/jersey-common-2.34.jar:/kafka/libs/jersey-container-servlet-2.34.jar:/kafka/libs/jersey-container-servlet-core-2.34.jar:/kafka/libs/jersey-hk2-2.34.jar:/kafka/libs/jersey-server-2.34.jar:/kafka/libs/jetty-client-9.4.43.v20210629.jar:/kafka/libs/jetty-continuation-9.4.43.v20210629.jar:/kafka/libs/jetty-http-9.4.43.v20210629.jar:/kafka/libs/jetty-io-9.4.43.v20210629.jar:/kafka/libs/jetty-security-9.4.43.v20210629.jar:/kafka/libs/jetty-server-9.4.43.v20210629.jar:/kafka/libs/jetty-servlet-9.4.43.v20210629.jar:/kafka/libs/jetty-servlets-9.4.43.v20210629.jar:/kafka/libs/jetty-util-9.4.43.v20210629.jar:/kafka/libs/jetty-util-ajax-9.4.43.v20210629.jar:/kafka/libs/jline-3.12.1.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/jose4j-0.7.8.jar:/kafka/libs/kafka-avro-serializer-7.0.1.jar:/kafka/libs/kafka-clients-3.1.0.jar:/kafka/libs/kafka-connect-avro-converter-7.0.1.jar:/kafka/libs/kafka-connect-avro-data-7.0.1.jar:/kafka/libs/kafka-log4j-appender-3.1.0.jar:/kafka/libs/kafka-metadata-3.1.0.jar:/kafka/libs/kafka-raft-3.1.0.jar:/kafka/libs/kafka-schema-registry-client-7.0.1.jar:/kafka/libs/kafka-schema-serializer-7.0.1.jar:/kafka/libs/kafka-server-common-3.1.0.jar:/kafka/libs/kafka-shell-3.1.0.jar:/kafka/libs/kafka-storage-3.1.0.jar:/kafka/libs/kafka-storage-api-3.1.0.jar:/kafka/libs/kafka-streams-3.1.0.jar:/kafka/libs/kafka-streams-examples-3.1.0.jar:/kafka/libs/kafka-streams-scala_2.13-3.1.0.jar:/kafka/libs/kafka-streams-test-utils-3.1.0.jar:/kafka/libs/kafka-tools-3.1.0.jar:/kafka/libs/kafka_2.13-3.1.0.jar:/kafka/libs/log4j-1.2.17.jar:/kafka/libs/lz4-java-1.8.0.jar:/kafka/libs/maven-artifact-3.8.1.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.68.Final.jar:/kafka/libs/netty-codec-4.1.68.Final.jar:/kafka/libs/netty-common-4.1.68.Final.jar:/kafka/libs/netty-handler-4.1.68.Final.jar:/kafka/libs/netty-resolver-4.1.68.Final.jar:/kafka/libs/netty-transport-4.1.68.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.68.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.68.Final.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/plexus-utils-3.2.1.jar:/kafka/libs/reflections-0.9.12.jar:/kafka/libs/rocksdbjni-6.22.1.1.jar:/kafka/libs/scala-collection-compat_2.13-2.4.4.jar:/kafka/libs/scala-java8-compat_2.13-1.0.0.jar:/kafka/libs/scala-library-2.13.6.jar:/kafka/libs/scala-logging_2.13-3.9.3.jar:/kafka/libs/scala-reflect-2.13.6.jar:/kafka/libs/slf4j-api-1.7.30.jar:/kafka/libs/slf4j-log4j12-1.7.30.jar:/kafka/libs/snappy-java-1.1.8.4.jar:/kafka/libs/trogdor-3.1.0.jar:/kafka/libs/zookeeper-3.6.3.jar:/kafka/libs/zookeeper-jute-3.6.3.jar:/kafka/libs/zstd-jni-1.5.0-4.jar
connect_1   | os.spec = Linux, amd64, 5.8.0-59-generic
connect_1   | os.vcpus = 12
connect_1   | [org.apache.kafka.connect.runtime.WorkerInfo]
connect_1   | 2022-04-21 13:24:26,366 INFO || Scanning for plugin classes. This might take a moment ... [org.apache.kafka.connect.cli.ConnectDistributed]
kafka_1     | 2022-04-21 13:24:26,373 - INFO [main:Logging@66] - [ZooKeeperClient Kafka server] Waiting until connected.
kafka_1     | 2022-04-21 13:24:26,388 - INFO [main-SendThread(zookeeper:2181):ClientCnxn$SendThread@1181] - Opening socket connection to server zookeeper/172.19.0.2:2181.
kafka_1     | 2022-04-21 13:24:26,389 - INFO [main-SendThread(zookeeper:2181):ClientCnxn$SendThread@1183] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
connect_1   | 2022-04-21 13:24:26,389 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1     | 2022-04-21 13:24:26,398 - INFO [main-SendThread(zookeeper:2181):ClientCnxn$SendThread@1013] - Socket connection established, initiating session, client: /172.19.0.4:42856, server: zookeeper/172.19.0.2:2181
zookeeper_1 | 2022-04-21 13:24:26,412 - INFO [SyncThread:0:FileTxnLog@284] - Creating new log file: log.109
kafka_1     | 2022-04-21 13:24:26,424 - INFO [main-SendThread(zookeeper:2181):ClientCnxn$SendThread@1448] - Session establishment complete on server zookeeper/172.19.0.2:2181, session id = 0x10001da64130000, negotiated timeout = 18000
kafka_1     | 2022-04-21 13:24:26,431 - INFO [main:Logging@66] - [ZooKeeperClient Kafka server] Connected.
kafka_1 | 2022-04-21 13:24:26,620 - INFO [feature-zk-node-event-process-thread:Logging@66] - [feature-zk-node-event-process-thread]: Starting kafka_1 | 2022-04-21 13:24:26,847 - INFO [feature-zk-node-event-process-thread:Logging@66] - Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). kafka_1 | 2022-04-21 13:24:26,857 - INFO [main:Logging@66] - Cluster ID = _Fc9d2urQwKYMwlE0QT5Dw kafka_1 | 2022-04-21 13:24:26,934 - INFO [main:AbstractConfig@376] - KafkaConfig values: kafka_1 | advertised.listeners = PLAINTEXT://172.19.0.4:9092 kafka_1 | alter.config.policy.class.name = null kafka_1 | alter.log.dirs.replication.quota.window.num = 11 kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka_1 | authorizer.class.name = kafka_1 | auto.create.topics.enable = true kafka_1 | auto.leader.rebalance.enable = true kafka_1 | background.threads = 10 kafka_1 | broker.heartbeat.interval.ms = 2000 kafka_1 | broker.id = 1 kafka_1 | broker.id.generation.enable = true kafka_1 | broker.rack = null kafka_1 | broker.session.timeout.ms = 9000 kafka_1 | client.quota.callback.class = null kafka_1 | compression.type = producer kafka_1 | connection.failed.authentication.delay.ms = 100 kafka_1 | connections.max.idle.ms = 600000 kafka_1 | connections.max.reauth.ms = 0 kafka_1 | control.plane.listener.name = null kafka_1 | controlled.shutdown.enable = true kafka_1 | controlled.shutdown.max.retries = 3 kafka_1 | controlled.shutdown.retry.backoff.ms = 5000 kafka_1 | controller.listener.names = null kafka_1 | controller.quorum.append.linger.ms = 25 kafka_1 | controller.quorum.election.backoff.max.ms = 1000 kafka_1 | controller.quorum.election.timeout.ms = 1000 kafka_1 | controller.quorum.fetch.timeout.ms = 2000 kafka_1 | controller.quorum.request.timeout.ms = 2000 kafka_1 | controller.quorum.retry.backoff.ms = 20 kafka_1 | controller.quorum.voters = [] kafka_1 | controller.quota.window.num = 11 kafka_1 | 
controller.quota.window.size.seconds = 1 kafka_1 | controller.socket.timeout.ms = 30000 kafka_1 | create.topic.policy.class.name = null kafka_1 | default.replication.factor = 1 kafka_1 | delegation.token.expiry.check.interval.ms = 3600000 kafka_1 | delegation.token.expiry.time.ms = 86400000 kafka_1 | delegation.token.master.key = null kafka_1 | delegation.token.max.lifetime.ms = 604800000 kafka_1 | delegation.token.secret.key = null kafka_1 | delete.records.purgatory.purge.interval.requests = 1 kafka_1 | delete.topic.enable = true kafka_1 | fetch.max.bytes = 57671680 kafka_1 | fetch.purgatory.purge.interval.requests = 1000 kafka_1 | group.initial.rebalance.delay.ms = 0 kafka_1 | group.max.session.timeout.ms = 1800000 kafka_1 | group.max.size = 2147483647 kafka_1 | group.min.session.timeout.ms = 6000 kafka_1 | initial.broker.registration.timeout.ms = 60000 kafka_1 | inter.broker.listener.name = null kafka_1 | inter.broker.protocol.version = 3.1-IV0 kafka_1 | kafka.metrics.polling.interval.secs = 10 kafka_1 | kafka.metrics.reporters = [] kafka_1 | leader.imbalance.check.interval.seconds = 300 kafka_1 | leader.imbalance.per.broker.percentage = 10 kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL kafka_1 | listeners = PLAINTEXT://172.19.0.4:9092 kafka_1 | log.cleaner.backoff.ms = 15000 kafka_1 | log.cleaner.dedupe.buffer.size = 134217728 kafka_1 | log.cleaner.delete.retention.ms = 86400000 kafka_1 | log.cleaner.enable = true kafka_1 | log.cleaner.io.buffer.load.factor = 0.9 kafka_1 | log.cleaner.io.buffer.size = 524288 kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka_1 | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka_1 | log.cleaner.min.cleanable.ratio = 0.5 kafka_1 | log.cleaner.min.compaction.lag.ms = 0 kafka_1 | log.cleaner.threads = 1 kafka_1 | log.cleanup.policy = [delete] kafka_1 | log.dir = /tmp/kafka-logs kafka_1 | log.dirs = /kafka/data/1 kafka_1 | 
log.flush.interval.messages = 9223372036854775807 kafka_1 | log.flush.interval.ms = null kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000 kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807 kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka_1 | log.index.interval.bytes = 4096 kafka_1 | log.index.size.max.bytes = 10485760 kafka_1 | log.message.downconversion.enable = true kafka_1 | log.message.format.version = 3.0-IV1 kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka_1 | log.message.timestamp.type = CreateTime kafka_1 | log.preallocate = false kafka_1 | log.retention.bytes = -1 kafka_1 | log.retention.check.interval.ms = 300000 kafka_1 | log.retention.hours = 168 kafka_1 | log.retention.minutes = null kafka_1 | log.retention.ms = null kafka_1 | log.roll.hours = 168 kafka_1 | log.roll.jitter.hours = 0 kafka_1 | log.roll.jitter.ms = null kafka_1 | log.roll.ms = null kafka_1 | log.segment.bytes = 1073741824 kafka_1 | log.segment.delete.delay.ms = 60000 kafka_1 | max.connection.creation.rate = 2147483647 kafka_1 | max.connections = 2147483647 kafka_1 | max.connections.per.ip = 2147483647 kafka_1 | max.connections.per.ip.overrides = kafka_1 | max.incremental.fetch.session.cache.slots = 1000 kafka_1 | message.max.bytes = 1048588 kafka_1 | metadata.log.dir = null kafka_1 | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka_1 | metadata.log.segment.bytes = 1073741824 kafka_1 | metadata.log.segment.min.bytes = 8388608 kafka_1 | metadata.log.segment.ms = 604800000 kafka_1 | metadata.max.retention.bytes = -1 kafka_1 | metadata.max.retention.ms = 604800000 kafka_1 | metric.reporters = [] kafka_1 | metrics.num.samples = 2 kafka_1 | metrics.recording.level = INFO kafka_1 | metrics.sample.window.ms = 30000 kafka_1 | min.insync.replicas = 1 kafka_1 | node.id = 1 kafka_1 | num.io.threads = 8 kafka_1 | num.network.threads = 3 kafka_1 | num.partitions = 1 kafka_1 | num.recovery.threads.per.data.dir = 1 
kafka_1 | num.replica.alter.log.dirs.threads = null kafka_1 | num.replica.fetchers = 1 kafka_1 | offset.metadata.max.bytes = 4096 kafka_1 | offsets.commit.required.acks = -1 kafka_1 | offsets.commit.timeout.ms = 5000 kafka_1 | offsets.load.buffer.size = 5242880 kafka_1 | offsets.retention.check.interval.ms = 600000 kafka_1 | offsets.retention.minutes = 10080 kafka_1 | offsets.topic.compression.codec = 0 kafka_1 | offsets.topic.num.partitions = 50 kafka_1 | offsets.topic.replication.factor = 1 kafka_1 | offsets.topic.segment.bytes = 104857600 kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka_1 | password.encoder.iterations = 4096 kafka_1 | password.encoder.key.length = 128 kafka_1 | password.encoder.keyfactory.algorithm = null kafka_1 | password.encoder.old.secret = null kafka_1 | password.encoder.secret = null kafka_1 | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka_1 | process.roles = [] kafka_1 | producer.purgatory.purge.interval.requests = 1000 kafka_1 | queued.max.request.bytes = -1 kafka_1 | queued.max.requests = 500 kafka_1 | quota.window.num = 11 kafka_1 | quota.window.size.seconds = 1 kafka_1 | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka_1 | remote.log.manager.task.interval.ms = 30000 kafka_1 | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka_1 | remote.log.manager.task.retry.backoff.ms = 500 kafka_1 | remote.log.manager.task.retry.jitter = 0.2 kafka_1 | remote.log.manager.thread.pool.size = 10 kafka_1 | remote.log.metadata.manager.class.name = null kafka_1 | remote.log.metadata.manager.class.path = null kafka_1 | remote.log.metadata.manager.impl.prefix = null kafka_1 | remote.log.metadata.manager.listener.name = null kafka_1 | remote.log.reader.max.pending.tasks = 100 kafka_1 | remote.log.reader.threads = 10 kafka_1 | remote.log.storage.manager.class.name = null kafka_1 | remote.log.storage.manager.class.path = null kafka_1 | 
remote.log.storage.manager.impl.prefix = null kafka_1 | remote.log.storage.system.enable = false kafka_1 | replica.fetch.backoff.ms = 1000 kafka_1 | replica.fetch.max.bytes = 1048576 kafka_1 | replica.fetch.min.bytes = 1 kafka_1 | replica.fetch.response.max.bytes = 10485760 kafka_1 | replica.fetch.wait.max.ms = 500 kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000 kafka_1 | replica.lag.time.max.ms = 30000 kafka_1 | replica.selector.class = null kafka_1 | replica.socket.receive.buffer.bytes = 65536 kafka_1 | replica.socket.timeout.ms = 30000 kafka_1 | replication.quota.window.num = 11 kafka_1 | replication.quota.window.size.seconds = 1 kafka_1 | request.timeout.ms = 30000 kafka_1 | reserved.broker.max.id = 1000 kafka_1 | sasl.client.callback.handler.class = null kafka_1 | sasl.enabled.mechanisms = [GSSAPI] kafka_1 | sasl.jaas.config = null kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka_1 | sasl.kerberos.min.time.before.relogin = 60000 kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka_1 | sasl.kerberos.service.name = null kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05 kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka_1 | sasl.login.callback.handler.class = null kafka_1 | sasl.login.class = null kafka_1 | sasl.login.connect.timeout.ms = null kafka_1 | sasl.login.read.timeout.ms = null kafka_1 | sasl.login.refresh.buffer.seconds = 300 kafka_1 | sasl.login.refresh.min.period.seconds = 60 kafka_1 | sasl.login.refresh.window.factor = 0.8 kafka_1 | sasl.login.refresh.window.jitter = 0.05 kafka_1 | sasl.login.retry.backoff.max.ms = 10000 kafka_1 | sasl.login.retry.backoff.ms = 100 kafka_1 | sasl.mechanism.controller.protocol = GSSAPI kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI kafka_1 | sasl.oauthbearer.clock.skew.seconds = 30 kafka_1 | sasl.oauthbearer.expected.audience = null kafka_1 | sasl.oauthbearer.expected.issuer = null kafka_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka_1 | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka_1 | sasl.oauthbearer.jwks.endpoint.url = null kafka_1 | sasl.oauthbearer.scope.claim.name = scope kafka_1 | sasl.oauthbearer.sub.claim.name = sub kafka_1 | sasl.oauthbearer.token.endpoint.url = null kafka_1 | sasl.server.callback.handler.class = null kafka_1 | security.inter.broker.protocol = PLAINTEXT kafka_1 | security.providers = null kafka_1 | socket.connection.setup.timeout.max.ms = 30000 kafka_1 | socket.connection.setup.timeout.ms = 10000 kafka_1 | socket.receive.buffer.bytes = 102400 kafka_1 | socket.request.max.bytes = 104857600 kafka_1 | socket.send.buffer.bytes = 102400 kafka_1 | ssl.cipher.suites = [] kafka_1 | ssl.client.auth = none kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka_1 | ssl.endpoint.identification.algorithm = https kafka_1 | ssl.engine.factory.class = null kafka_1 | ssl.key.password = null kafka_1 | ssl.keymanager.algorithm = SunX509 kafka_1 | ssl.keystore.certificate.chain = null kafka_1 | ssl.keystore.key = null kafka_1 | ssl.keystore.location = null kafka_1 | ssl.keystore.password = null kafka_1 | ssl.keystore.type = JKS kafka_1 | ssl.principal.mapping.rules = DEFAULT kafka_1 | ssl.protocol = TLSv1.3 kafka_1 | ssl.provider = null kafka_1 | ssl.secure.random.implementation = null kafka_1 | ssl.trustmanager.algorithm = PKIX kafka_1 | ssl.truststore.certificates = null kafka_1 | ssl.truststore.location = null kafka_1 | ssl.truststore.password = null kafka_1 | ssl.truststore.type = JKS kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka_1 | transaction.max.timeout.ms = 900000 kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka_1 | transaction.state.log.load.buffer.size = 5242880 kafka_1 | transaction.state.log.min.isr = 1 kafka_1 | transaction.state.log.num.partitions = 50 kafka_1 | transaction.state.log.replication.factor = 1 kafka_1 | 
transaction.state.log.segment.bytes = 104857600 kafka_1 | transactional.id.expiration.ms = 604800000 kafka_1 | unclean.leader.election.enable = false kafka_1 | zookeeper.clientCnxnSocket = null kafka_1 | zookeeper.connect = zookeeper:2181 kafka_1 | zookeeper.connection.timeout.ms = 18000 kafka_1 | zookeeper.max.in.flight.requests = 10 kafka_1 | zookeeper.session.timeout.ms = 18000 kafka_1 | zookeeper.set.acl = false kafka_1 | zookeeper.ssl.cipher.suites = null kafka_1 | zookeeper.ssl.client.enable = false kafka_1 | zookeeper.ssl.crl.enable = false kafka_1 | zookeeper.ssl.enabled.protocols = null kafka_1 | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka_1 | zookeeper.ssl.keystore.location = null kafka_1 | zookeeper.ssl.keystore.password = null kafka_1 | zookeeper.ssl.keystore.type = null kafka_1 | zookeeper.ssl.ocsp.enable = false kafka_1 | zookeeper.ssl.protocol = TLSv1.2 kafka_1 | zookeeper.ssl.truststore.location = null kafka_1 | zookeeper.ssl.truststore.password = null kafka_1 | zookeeper.ssl.truststore.type = null kafka_1 | zookeeper.sync.time.ms = 2000 kafka_1 | kafka_1 | 2022-04-21 13:24:27,028 - INFO [ThrottledChannelReaper-Fetch:Logging@66] - [ThrottledChannelReaper-Fetch]: Starting kafka_1 | 2022-04-21 13:24:27,030 - INFO [ThrottledChannelReaper-Produce:Logging@66] - 
[ThrottledChannelReaper-Produce]: Starting kafka_1 | 2022-04-21 13:24:27,032 - INFO [ThrottledChannelReaper-Request:Logging@66] - [ThrottledChannelReaper-Request]: Starting kafka_1 | 2022-04-21 13:24:27,035 - INFO [ThrottledChannelReaper-ControllerMutation:Logging@66] - [ThrottledChannelReaper-ControllerMutation]: Starting kafka_1 | 2022-04-21 13:24:27,133 - INFO [main:Logging@66] - Loading logs from log dirs ArraySeq(/kafka/data/1) kafka_1 | 2022-04-21 13:24:27,143 - INFO [main:Logging@66] - Skipping recovery for all logs in /kafka/data/1 since clean shutdown file was found connect_1 | 2022-04-21 13:24:27,221 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,222 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,222 INFO || Added plugin 'io.debezium.converters.ByteBufferConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,222 INFO || Added plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,222 INFO || Added plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,222 INFO || Added plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'io.debezium.transforms.ContentBasedRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 
| 2022-04-21 13:24:27,223 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'io.debezium.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,223 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:27,230 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] kafka_1 | 2022-04-21 13:24:27,254 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-33, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:27,288 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-33, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=33, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 121ms (1/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:27,296 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-4, 
dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:27,301 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-4, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (2/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:27,310 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-7, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:27,316 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-7, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (3/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:27,327 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-11, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:27,332 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-11, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (4/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:27,357 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-2, dir=/kafka/data/1] Loading producer state till offset 4 with message format version 2 kafka_1 | 2022-04-21 13:24:27,357 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-2, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 4 
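The broker is reloading the 50 internal `__consumer_offsets` partitions (plus the Connect offset/status topics) from `/kafka/data/1`. Each consumer group's offsets live in exactly one of those 50 partitions, chosen by hashing the group id. A sketch of that mapping, mirroring Kafka's `GroupMetadataManager.partitionFor` (which uses Java's `String.hashCode` and `Utils.abs`, i.e. a mask with `0x7fffffff`); the helper names here are my own:

```python
def java_string_hashcode(s: str) -> int:
    """Java String.hashCode (31-based polynomial), kept as an unsigned
    32-bit value. ord() matches Java's UTF-16 code units for BMP chars."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka: Utils.abs(groupId.hashCode) % offsets.topic.num.partitions,
    # where Utils.abs(n) is n & 0x7fffffff (never negative).
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition("a"))  # hashCode("a") == 97, so partition 47
```

This is why a single group's lag shows up in only one `__consumer_offsets-NN` partition, and why the default `offsets.topic.num.partitions = 50` from the config dump above matters for coordinator load spreading.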
kafka_1 | 2022-04-21 13:24:27,361 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=my_connect_statuses-2] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/my_connect_statuses-2/00000000000000000004.snapshot,4)'
kafka_1 | 2022-04-21 13:24:27,373 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-2, dir=/kafka/data/1] Producer state recovery took 15ms for snapshot load and 0ms for segment recovery from offset 4
kafka_1 | 2022-04-21 13:24:27,379 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_statuses-2, topicId=aYxjrvdDTam-gnQaJPAszA, topic=my_connect_statuses, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=4) with 1 segments in 46ms (5/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,386 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-8, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,392 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-8, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (6/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,399 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-12, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,405 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-12, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (7/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,412 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-9, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,419 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-9, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (8/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,428 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-43, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,436 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-43, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=43, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (9/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,444 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-8, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,448 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-8, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (10/84 loaded in /kafka/data/1)
connect_1 | 2022-04-21 13:24:27,453 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,453 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,453 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,454 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,454 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:27,459 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-46, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,463 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-46, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=46, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (11/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,473 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-17, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,477 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-17, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (12/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,485 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-12, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,489 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-12, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (13/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,497 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-27, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,502 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-27, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (14/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,509 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-5, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,512 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-5, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=5, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (15/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,520 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-37, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,525 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-37, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=37, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (16/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,531 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-3, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,536 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-3, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (17/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,543 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-32, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,545 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-32, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=32, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (18/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,552 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-16, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,555 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-16, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (19/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,563 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-21, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,568 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-21, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (20/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,576 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-39, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,579 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-39, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=39, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (21/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,586 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-2, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,589 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-2, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (22/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,594 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-0, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,598 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-0, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (23/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,606 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-4, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,611 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-4, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (24/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,617 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-35, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
connect_1 | 2022-04-21 13:24:27,622 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,622 INFO || Added plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,622 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:27,623 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-35, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=35, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (25/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,631 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-47, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,635 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-47, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=47, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (26/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,643 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-7, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,646 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-7, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (27/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,655 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-19, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,659 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-19, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (28/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,666 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-10, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,672 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-10, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (29/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,683 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact_db.schema-changes-0, dir=/kafka/data/1] Loading producer state till offset 2 with message format version 2
kafka_1 | 2022-04-21 13:24:27,683 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact_db.schema-changes-0, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 2
kafka_1 | 2022-04-21 13:24:27,683 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=contact_db.schema-changes-0] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/contact_db.schema-changes-0/00000000000000000002.snapshot,2)'
kafka_1 | 2022-04-21 13:24:27,684 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact_db.schema-changes-0, dir=/kafka/data/1] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 2
kafka_1 | 2022-04-21 13:24:27,687 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/contact_db.schema-changes-0, topicId=I-6FAdvZT2KHBSXH0d61iA, topic=contact_db.schema-changes, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2) with 1 segments in 15ms (30/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,695 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-15, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,698 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-15, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=15, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (31/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,707 - INFO [log-recovery-/kafka/data/1:Logging@66] - Deleted producer state snapshot /kafka/data/1/__consumer_offsets-49/00000000000000000004.snapshot
kafka_1 | 2022-04-21 13:24:27,708 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-49, dir=/kafka/data/1] Loading producer state till offset 8 with message format version 2
kafka_1 | 2022-04-21 13:24:27,708 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-49, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 8
kafka_1 | 2022-04-21 13:24:27,708 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=__consumer_offsets-49] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/__consumer_offsets-49/00000000000000000008.snapshot,8)'
kafka_1 | 2022-04-21 13:24:27,708 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-49, dir=/kafka/data/1] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 8
kafka_1 | 2022-04-21 13:24:27,710 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-49, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=49, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=8) with 1 segments in 12ms (32/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,715 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-0, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,718 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-0, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (33/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,726 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-34, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,729 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-34, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=34, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (34/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,737 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-23, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,741 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-23, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (35/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,748 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-3, dir=/kafka/data/1] Loading producer state till offset 2 with message format version 2
kafka_1 | 2022-04-21 13:24:27,748 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-3, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 2
kafka_1 | 2022-04-21 13:24:27,748 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=my_connect_statuses-3] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/my_connect_statuses-3/00000000000000000002.snapshot,2)'
kafka_1 | 2022-04-21 13:24:27,749 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-3, dir=/kafka/data/1] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 2
kafka_1 | 2022-04-21 13:24:27,752 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_statuses-3, topicId=aYxjrvdDTam-gnQaJPAszA, topic=my_connect_statuses, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2) with 1 segments in 11ms (36/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,760 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-38, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,762 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-38, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=38, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (37/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,769 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-4, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,773 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_statuses-4, topicId=aYxjrvdDTam-gnQaJPAszA, topic=my_connect_statuses, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (38/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,780 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-9, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,785 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-9, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (39/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,792 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-14, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,796 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-14, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (40/84 loaded in /kafka/data/1)
connect_1 | 2022-04-21 13:24:27,803 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:27,803 INFO || Added plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:27,805 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-18, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,808 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-18, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=18, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (41/84 loaded in /kafka/data/1)
connect_1 | 2022-04-21 13:24:27,809 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:27,813 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-26, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,816 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-26, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (42/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,826 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-23, dir=/kafka/data/1] Loading producer state till offset 2 with message format version 2
kafka_1 | 2022-04-21 13:24:27,827 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-23, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 2
kafka_1 | 2022-04-21 13:24:27,827 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=my_connect_offsets-23] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/my_connect_offsets-23/00000000000000000002.snapshot,2)'
kafka_1 | 2022-04-21 13:24:27,828 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-23, dir=/kafka/data/1] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 2
kafka_1 | 2022-04-21 13:24:27,831 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-23, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2) with 1 segments in 13ms (43/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,841 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-40, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,844 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-40, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=40, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (44/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,857 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-6, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,861 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-6, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (45/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,867 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-24, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,871 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-24, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=24, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (46/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,884 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact_db-0, dir=/kafka/data/1] Loading producer state till offset 6 with message format version 2
kafka_1 | 2022-04-21 13:24:27,884 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact_db-0, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 6
kafka_1 | 2022-04-21 13:24:27,884 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=contact_db-0] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/contact_db-0/00000000000000000006.snapshot,6)'
kafka_1 | 2022-04-21 13:24:27,885 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact_db-0, dir=/kafka/data/1] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 6
kafka_1 | 2022-04-21 13:24:27,888 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/contact_db-0, topicId=YoWfqSw9QnGTn7k_WJmAGg, topic=contact_db, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=6) with 1 segments in 17ms (47/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,899 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-22, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,904 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-22, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (48/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,914 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-21, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,924 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-21, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 18ms (49/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,936 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-17, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,943 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-17, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (50/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,955 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-3, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,959 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-3, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (51/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,966 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-15, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,969 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-15, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=15, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (52/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,977 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-31, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,980 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-31, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=31, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (53/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,987 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-24, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:27,990 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-24, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=24, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (54/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:27,997 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-45, dir=/kafka/data/1] Loading producer state till offset 1 with message format version 2
kafka_1 | 2022-04-21 13:24:27,998 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-45, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 1
kafka_1 | 2022-04-21 13:24:27,998 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=__consumer_offsets-45] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/__consumer_offsets-45/00000000000000000001.snapshot,1)'
kafka_1 | 2022-04-21 13:24:27,999 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-45, dir=/kafka/data/1] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 1
kafka_1 | 2022-04-21 13:24:28,001 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-45, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=45, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=1) with 1 segments in 11ms (55/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,009 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-22, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,011 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-22, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (56/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,016 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-11, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,020 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-11, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (57/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,027 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-30, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,030 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-30, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=30, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (58/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,036 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-0, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,038 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_statuses-0, topicId=aYxjrvdDTam-gnQaJPAszA, topic=my_connect_statuses, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (59/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,043 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-2, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,045 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-2, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (60/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,051 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-44, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,053 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-44, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=44, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (61/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,060 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-28, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,062 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-28, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (62/84 loaded in /kafka/data/1)
kafka_1 | 2022-04-21 13:24:28,068 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-25, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:24:28,071 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-25, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (63/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,077 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-10, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,079 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-10, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (64/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,084 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-5, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,087 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-5, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=5, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (65/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,093 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-18, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,095 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-18, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=18, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms 
(66/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,102 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-16, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,104 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-16, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (67/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,110 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-13, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,111 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-13, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (68/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,115 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-1, dir=/kafka/data/1] Loading producer state till offset 7 with message format version 2 kafka_1 | 2022-04-21 13:24:28,116 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_statuses-1, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 7 kafka_1 | 2022-04-21 13:24:28,116 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=my_connect_statuses-1] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/my_connect_statuses-1/00000000000000000007.snapshot,7)' kafka_1 | 2022-04-21 13:24:28,116 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader 
partition=my_connect_statuses-1, dir=/kafka/data/1] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 7 kafka_1 | 2022-04-21 13:24:28,119 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_statuses-1, topicId=aYxjrvdDTam-gnQaJPAszA, topic=my_connect_statuses, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=7) with 1 segments in 8ms (69/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,126 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-20, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,128 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-20, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (70/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,133 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-14, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,136 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-14, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (71/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,143 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-13, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,145 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-13, 
topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (72/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,153 - INFO [log-recovery-/kafka/data/1:Logging@66] - Deleted producer state snapshot /kafka/data/1/my_connect_configs-0/00000000000000000002.snapshot kafka_1 | 2022-04-21 13:24:28,154 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_configs-0, dir=/kafka/data/1] Loading producer state till offset 6 with message format version 2 kafka_1 | 2022-04-21 13:24:28,154 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_configs-0, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 6 kafka_1 | 2022-04-21 13:24:28,154 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=my_connect_configs-0] Loading producer state from snapshot file 'SnapshotFile(/kafka/data/1/my_connect_configs-0/00000000000000000006.snapshot,6)' kafka_1 | 2022-04-21 13:24:28,154 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_configs-0, dir=/kafka/data/1] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 6 kafka_1 | 2022-04-21 13:24:28,156 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_configs-0, topicId=fML0POG8THqTjJyfWIg5aQ, topic=my_connect_configs, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=6) with 1 segments in 11ms (73/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,164 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-29, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,166 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of 
Log(dir=/kafka/data/1/__consumer_offsets-29, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=29, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (74/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,173 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-36, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,175 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-36, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=36, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (75/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,180 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-1, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,182 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-1, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (76/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,189 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact.debezium.changes-0, dir=/kafka/data/1] Loading producer state till offset 1 with message format version 2 kafka_1 | 2022-04-21 13:24:28,189 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact.debezium.changes-0, dir=/kafka/data/1] Reloading from producer snapshot and rebuilding producer state from offset 1 kafka_1 | 2022-04-21 13:24:28,190 - INFO [log-recovery-/kafka/data/1:Logging@66] - [ProducerStateManager partition=contact.debezium.changes-0] Loading producer state from 
snapshot file 'SnapshotFile(/kafka/data/1/contact.debezium.changes-0/00000000000000000001.snapshot,1)' kafka_1 | 2022-04-21 13:24:28,190 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=contact.debezium.changes-0, dir=/kafka/data/1] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1 kafka_1 | 2022-04-21 13:24:28,192 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/contact.debezium.changes-0, topicId=0RlDmGYWRauQMKie6FJ1CA, topic=contact.debezium.changes, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=1) with 1 segments in 10ms (77/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,198 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-19, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,200 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-19, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (78/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,206 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-20, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,207 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-20, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (79/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,212 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-42, dir=/kafka/data/1] Loading 
producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,213 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-42, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=42, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (80/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,219 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-48, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,220 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-48, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=48, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (81/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,225 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-1, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,227 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-1, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (82/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,234 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=my_connect_offsets-6, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,236 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/my_connect_offsets-6, topicId=oEOpzldgQWWSRVA80nIXfw, topic=my_connect_offsets, partition=6, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (83/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,241 - INFO [log-recovery-/kafka/data/1:UnifiedLog$@1722] - [LogLoader partition=__consumer_offsets-41, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2 kafka_1 | 2022-04-21 13:24:28,242 - INFO [log-recovery-/kafka/data/1:Logging@66] - Completed load of Log(dir=/kafka/data/1/__consumer_offsets-41, topicId=Dv2RfarfQ8Osn4vXaQ1KOw, topic=__consumer_offsets, partition=41, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (84/84 loaded in /kafka/data/1) kafka_1 | 2022-04-21 13:24:28,244 - INFO [main:Logging@66] - Loaded 84 logs in 1111ms. kafka_1 | 2022-04-21 13:24:28,245 - INFO [main:Logging@66] - Starting log cleanup with a period of 300000 ms. kafka_1 | 2022-04-21 13:24:28,247 - INFO [main:Logging@66] - Starting log flusher with a default period of 9223372036854775807 ms. connect_1 | 2022-04-21 13:24:28,356 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:28,356 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:28,356 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] kafka_1 | 2022-04-21 13:24:28,597 - INFO [BrokerToControllerChannelManager broker=1 name=forwarding:Logging@66] - [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting connect_1 | 2022-04-21 13:24:28,625 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:28,625 INFO || Added plugin 
'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:28,659 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] kafka_1 | 2022-04-21 13:24:28,785 - INFO [main:Logging@66] - Updated connection-accept-rate max connection creation rate to 2147483647 kafka_1 | 2022-04-21 13:24:28,792 - INFO [main:Logging@66] - Awaiting socket connections on 172.19.0.4:9092. kafka_1 | 2022-04-21 13:24:28,838 - INFO [main:Logging@66] - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) kafka_1 | 2022-04-21 13:24:28,848 - INFO [BrokerToControllerChannelManager broker=1 name=alterIsr:Logging@66] - [BrokerToControllerChannelManager broker=1 name=alterIsr]: Starting kafka_1 | 2022-04-21 13:24:28,879 - INFO [ExpirationReaper-1-Produce:Logging@66] - [ExpirationReaper-1-Produce]: Starting kafka_1 | 2022-04-21 13:24:28,882 - INFO [ExpirationReaper-1-Fetch:Logging@66] - [ExpirationReaper-1-Fetch]: Starting kafka_1 | 2022-04-21 13:24:28,886 - INFO [ExpirationReaper-1-DeleteRecords:Logging@66] - [ExpirationReaper-1-DeleteRecords]: Starting kafka_1 | 2022-04-21 13:24:28,889 - INFO [ExpirationReaper-1-ElectLeader:Logging@66] - [ExpirationReaper-1-ElectLeader]: Starting kafka_1 | 2022-04-21 13:24:28,914 - INFO [LogDirFailureHandler:Logging@66] - [LogDirFailureHandler]: Starting kafka_1 | 2022-04-21 13:24:28,992 - INFO [main:Logging@66] - Creating /brokers/ids/1 (is it secure? 
false) kafka_1 | 2022-04-21 13:24:29,032 - INFO [main:Logging@66] - Stat of the created znode at /brokers/ids/1 is: 280,280,1650547469022,1650547469022,1,0,0,72059631531393024,204,0,280 kafka_1 | kafka_1 | 2022-04-21 13:24:29,033 - INFO [main:Logging@66] - Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://172.19.0.4:9092, czxid (broker epoch): 280 connect_1 | 2022-04-21 13:24:29,109 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] connect_1 | 2022-04-21 13:24:29,110 INFO || Added plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] kafka_1 | 2022-04-21 13:24:29,126 - INFO [ExpirationReaper-1-topic:Logging@66] - [ExpirationReaper-1-topic]: Starting kafka_1 | 2022-04-21 13:24:29,134 - INFO [ExpirationReaper-1-Heartbeat:Logging@66] - [ExpirationReaper-1-Heartbeat]: Starting kafka_1 | 2022-04-21 13:24:29,146 - INFO [ExpirationReaper-1-Rebalance:Logging@66] - [ExpirationReaper-1-Rebalance]: Starting kafka_1 | 2022-04-21 13:24:29,193 - INFO [main:Logging@66] - [GroupCoordinator 1]: Starting up. kafka_1 | 2022-04-21 13:24:29,235 - INFO [main:Logging@66] - [GroupCoordinator 1]: Startup complete. kafka_1 | 2022-04-21 13:24:29,335 - INFO [main:Logging@66] - [TransactionCoordinator id=1] Starting up. kafka_1 | 2022-04-21 13:24:29,344 - INFO [main:Logging@66] - [TransactionCoordinator id=1] Startup complete. 
kafka_1 | 2022-04-21 13:24:29,363 - INFO [TxnMarkerSenderThread-1:Logging@66] - [Transaction Marker Channel Manager 1]: Starting kafka_1 | 2022-04-21 13:24:29,462 - INFO [ExpirationReaper-1-AlterAcls:Logging@66] - [ExpirationReaper-1-AlterAcls]: Starting kafka_1 | 2022-04-21 13:24:29,527 - INFO [/config/changes-event-process-thread:Logging@66] - [/config/changes-event-process-thread]: Starting kafka_1 | 2022-04-21 13:24:29,595 - INFO [main:Logging@66] - [SocketServer listenerType=ZK_BROKER, nodeId=1] Starting socket server acceptors and processors kafka_1 | 2022-04-21 13:24:29,606 - INFO [main:Logging@66] - [SocketServer listenerType=ZK_BROKER, nodeId=1] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) kafka_1 | 2022-04-21 13:24:29,607 - INFO [main:Logging@66] - [SocketServer listenerType=ZK_BROKER, nodeId=1] Started socket server acceptors and processors kafka_1 | 2022-04-21 13:24:29,620 - INFO [main:AppInfoParser$AppInfo@119] - Kafka version: 3.1.0 kafka_1 | 2022-04-21 13:24:29,620 - INFO [main:AppInfoParser$AppInfo@120] - Kafka commitId: 37edeed0777bacb3 kafka_1 | 2022-04-21 13:24:29,620 - INFO [main:AppInfoParser$AppInfo@121] - Kafka startTimeMs: 1650547469608 kafka_1 | 2022-04-21 13:24:29,629 - INFO [main:Logging@66] - [KafkaServer id=1] started kafka_1 | 2022-04-21 13:24:29,682 - INFO [BrokerToControllerChannelManager broker=1 name=alterIsr:Logging@66] - [BrokerToControllerChannelManager broker=1 name=alterIsr]: Recorded new controller, from now on will use broker 172.19.0.4:9092 (id: 1 rack: null) kafka_1 | 2022-04-21 13:24:29,721 - INFO [BrokerToControllerChannelManager broker=1 name=forwarding:Logging@66] - [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker 172.19.0.4:9092 (id: 1 rack: null) kafka_1 | 2022-04-21 13:24:29,798 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions 
HashSet(my_connect_statuses-1, __consumer_offsets-37, my_connect_configs-0, __consumer_offsets-13, my_connect_offsets-14, __consumer_offsets-22, contact.debezium.changes-0, my_connect_offsets-7, __consumer_offsets-30, my_connect_statuses-2, my_connect_offsets-15, __consumer_offsets-8, __consumer_offsets-21, contact_db.schema-changes-0, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, my_connect_offsets-20, __consumer_offsets-25, my_connect_offsets-12, __consumer_offsets-35, my_connect_offsets-2, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-49, my_connect_offsets-1, my_connect_offsets-22, __consumer_offsets-23, my_connect_offsets-24, my_connect_offsets-11, my_connect_statuses-3, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, my_connect_offsets-19, __consumer_offsets-31, my_connect_statuses-0, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, my_connect_offsets-16, __consumer_offsets-15, __consumer_offsets-24, contact_db-0, my_connect_offsets-18, __consumer_offsets-38, my_connect_offsets-10, my_connect_statuses-4, __consumer_offsets-17, my_connect_offsets-23, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, my_connect_offsets-5, __consumer_offsets-14, my_connect_offsets-13, my_connect_offsets-4, my_connect_offsets-17, my_connect_offsets-6, my_connect_offsets-3, my_connect_offsets-21, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40, my_connect_offsets-0, my_connect_offsets-8, my_connect_offsets-9) kafka_1 | 2022-04-21 13:24:29,815 - INFO 
[data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,829 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,833 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,838 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_offsets-9 broker=1] Log loaded for partition my_connect_offsets-9 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,844 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,848 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition contact.debezium.changes-0 broker=1] Log loaded for partition contact.debezium.changes-0 with initial high watermark 1 kafka_1 | 2022-04-21 13:24:29,849 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition contact_db.schema-changes-0 broker=1] Log loaded for partition contact_db.schema-changes-0 with initial high watermark 2 kafka_1 | 2022-04-21 13:24:29,851 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_offsets-2 broker=1] Log loaded for partition my_connect_offsets-2 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,856 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 kafka_1 | 2022-04-21 13:24:29,861 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_offsets-17 
broker=1] Log loaded for partition my_connect_offsets-17 with initial high watermark 0
kafka_1 | 2022-04-21 13:24:29,864 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0
[... similar "Log loaded" entries for the remaining __consumer_offsets, my_connect_offsets and my_connect_statuses partitions elided; all report initial high watermark 0 except the following ...]
kafka_1 | 2022-04-21 13:24:29,899 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 8
kafka_1 | 2022-04-21 13:24:29,905 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_statuses-3 broker=1] Log loaded for partition my_connect_statuses-3 with initial high watermark 2
kafka_1 | 2022-04-21 13:24:29,950 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_statuses-2 broker=1] Log loaded for partition my_connect_statuses-2 with initial high watermark 4
kafka_1 | 2022-04-21 13:24:29,961 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_configs-0 broker=1] Log loaded for partition my_connect_configs-0 with initial high watermark 6
kafka_1 | 2022-04-21 13:24:29,994 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_statuses-1 broker=1] Log loaded for partition my_connect_statuses-1 with initial high watermark 7
kafka_1 | 2022-04-21 13:24:30,011 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition contact_db-0 broker=1] Log loaded for partition contact_db-0 with initial high watermark 6
kafka_1 | 2022-04-21 13:24:30,039 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 1
kafka_1 | 2022-04-21 13:24:30,091 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition my_connect_offsets-23 broker=1] Log loaded for partition my_connect_offsets-23 with initial high watermark 2
kafka_1 | 2022-04-21 13:24:30,149 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0
kafka_1 | 2022-04-21 13:24:30,151 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0
[... matching "Elected as the group coordinator" / "Scheduling loading" pairs for the other __consumer_offsets partitions elided ...]
kafka_1 | 2022-04-21 13:24:30,159 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0
kafka_1 | 2022-04-21 13:24:30,159 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0
[... matching "Elected as the group coordinator" / "Scheduling loading" pairs for the remaining __consumer_offsets partitions elided ...]
kafka_1 | 2022-04-21 13:24:30,163 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0
kafka_1 | 2022-04-21 13:24:30,163 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0
kafka_1 | 2022-04-21 13:24:30,165 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 13 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,166 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,166 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,166 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. kafka_1 | 2022-04-21 13:24:30,167 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. kafka_1 | 2022-04-21 13:24:30,167 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. kafka_1 | 2022-04-21 13:24:30,170 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. kafka_1 | 2022-04-21 13:24:30,170 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. kafka_1 | 2022-04-21 13:24:30,171 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. kafka_1 | 2022-04-21 13:24:30,171 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
kafka_1 | 2022-04-21 13:24:30,171 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
connect_1 | 2022-04-21 13:24:30,194 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@3d4eac69 [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,194 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,195 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,196 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.runtime.PredicatedTransformation' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,197 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,198 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,199 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,199 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,199 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,199 INFO || Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,199 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,201 INFO || Added aliases 'Db2Connector' and 'Db2' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,202 INFO || Added aliases 'MongoDbConnector' and 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,202 INFO || Added aliases 'MySqlConnector' and 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,202 INFO || Added aliases 'OracleConnector' and 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,202 INFO || Added aliases 'PostgresConnector' and 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,203 INFO || Added aliases 'SqlServerConnector' and 'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,203 INFO || Added aliases 'VitessConnector' and 'Vitess' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,203 INFO || Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,204 INFO || Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,204 INFO || Added aliases 'MirrorCheckpointConnector' and 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,204 INFO || Added aliases 'MirrorHeartbeatConnector' and 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,204 INFO || Added aliases 'MirrorSourceConnector' and 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,205 INFO || Added aliases 'MockConnector' and 'Mock' to plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,205 INFO || Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,205 INFO || Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,205 INFO || Added aliases 'SchemaSourceConnector' and 'SchemaSource' to plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,206 INFO || Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,206 INFO || Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,206 INFO || Added aliases 'AvroConverter' and 'Avro' to plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,207 INFO || Added aliases 'ByteBufferConverter' and 'ByteBuffer' to plugin 'io.debezium.converters.ByteBufferConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,207 INFO || Added aliases 'CloudEventsConverter' and 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,212 - INFO [group-metadata-manager-0:GroupMetadata$@127] - Loaded member MemberMetadata(memberId=connect-1-8685b760-261e-454b-b0a8-6a037eeba405, groupInstanceId=None, clientId=connect-1, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=60000, supportedProtocols=List(sessioned)) in group 1 with generation 1.
connect_1 | 2022-04-21 13:24:30,207 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,207 INFO || Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,207 INFO || Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,212 - INFO [group-metadata-manager-0:GroupMetadata$@127] - Loaded member MemberMetadata(memberId=connect-1-d642cad1-773f-4b24-800a-2c3402e011e7, groupInstanceId=None, clientId=connect-1, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=60000, supportedProtocols=List(sessioned)) in group 1 with generation 3.
connect_1 | 2022-04-21 13:24:30,207 INFO || Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,213 - INFO [group-metadata-manager-0:GroupMetadata$@127] - Loaded member MemberMetadata(memberId=connect-1-b880ae3b-505c-4f29-b7d2-7498a80b4e74, groupInstanceId=None, clientId=connect-1, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=60000, supportedProtocols=List(sessioned)) in group 1 with generation 5.
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'ByteBufferConverter' and 'ByteBuffer' to plugin 'io.debezium.converters.ByteBufferConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,213 - INFO [group-metadata-manager-0:GroupMetadata$@127] - Loaded member MemberMetadata(memberId=connect-1-b880ae3b-505c-4f29-b7d2-7498a80b4e74, groupInstanceId=None, clientId=connect-1, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=60000, supportedProtocols=List(sessioned)) in group 1 with generation 6.
connect_1 | 2022-04-21 13:24:30,208 INFO || Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,209 INFO || Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,209 INFO || Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,214 - INFO [group-metadata-manager-0:GroupMetadata$@127] - Loaded member MemberMetadata(memberId=connect-1-b880ae3b-505c-4f29-b7d2-7498a80b4e74, groupInstanceId=None, clientId=connect-1, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=60000, supportedProtocols=List(sessioned)) in group 1 with generation 7.
connect_1 | 2022-04-21 13:24:30,209 INFO || Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,209 INFO || Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,209 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,209 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,209 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,210 INFO || Added alias 'ExtractNewDocumentState' to plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,210 INFO || Added alias 'MongoEventRouter' to plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,217 - INFO [group-metadata-manager-0:Logging@66] - [GroupCoordinator 1]: Loading group metadata for 1 with generation 8
connect_1 | 2022-04-21 13:24:30,210 INFO || Added alias 'ReadToInsertEvent' to plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,210 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,211 INFO || Added alias 'ContentBasedRouter' to plugin 'io.debezium.transforms.ContentBasedRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,211 INFO || Added alias 'ExtractNewRecordState' to plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,211 INFO || Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,211 INFO || Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,211 INFO || Added aliases 'PredicatedTransformation' and 'Predicated' to plugin 'org.apache.kafka.connect.runtime.PredicatedTransformation' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,212 INFO || Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,212 INFO || Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,212 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,212 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,213 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,213 INFO || Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,213 INFO || Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,213 INFO || Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,213 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,218 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 63 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
connect_1 | 2022-04-21 13:24:30,213 INFO || Added aliases 'AllConnectorClientConfigOverridePolicy' and 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
connect_1 | 2022-04-21 13:24:30,213 INFO || Added aliases 'NoneConnectorClientConfigOverridePolicy' and 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,219 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 64 milliseconds for epoch 0, of which 64 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,220 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 64 milliseconds for epoch 0, of which 64 milliseconds was spent in the scheduler.
connect_1 | 2022-04-21 13:24:30,213 INFO || Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
kafka_1 | 2022-04-21 13:24:30,221 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 65 milliseconds for epoch 0, of which 65 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,222 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 66 milliseconds for epoch 0, of which 65 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,223 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,223 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,224 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,224 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,225 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,225 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,225 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,225 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,226 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,226 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,226 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,226 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,226 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,227 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,227 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,227 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 68 milliseconds for epoch 0, of which 68 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,227 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,227 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 67 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,231 - INFO [group-metadata-manager-0:GroupMetadata$@127] - Loaded member MemberMetadata(memberId=console-consumer-1a523512-c5fa-4c5d-b8f3-e39ea5a248b8, groupInstanceId=None, clientId=console-consumer, clientHost=/172.19.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group console-consumer-83427 with generation 1.
kafka_1 | 2022-04-21 13:24:30,238 - INFO [group-metadata-manager-0:Logging@66] - [GroupCoordinator 1]: Loading group metadata for console-consumer-83427 with generation 1
kafka_1 | 2022-04-21 13:24:30,243 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 83 milliseconds for epoch 0, of which 67 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,244 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 83 milliseconds for epoch 0, of which 83 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,244 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 83 milliseconds for epoch 0, of which 83 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,245 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 84 milliseconds for epoch 0, of which 83 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,245 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 84 milliseconds for epoch 0, of which 84 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,245 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 84 milliseconds for epoch 0, of which 84 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,246 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 85 milliseconds for epoch 0, of which 85 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,246 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 85 milliseconds for epoch 0, of which 85 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,247 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 84 milliseconds for epoch 0, of which 84 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,247 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 85 milliseconds for epoch 0, of which 85 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,247 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 85 milliseconds for epoch 0, of which 85 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,247 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 85 milliseconds for epoch 0, of which 85 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,248 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 86 milliseconds for epoch 0, of which 86 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,248 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 86 milliseconds for epoch 0, of which 86 milliseconds was spent in the scheduler.
kafka_1 | 2022-04-21 13:24:30,249 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 86 milliseconds for epoch 0, of which 86 milliseconds was spent in the scheduler.
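The broker prints one of these GroupMetadataManager lines per `__consumer_offsets` partition (50 by default), which makes the startup log long but mechanically summarizable. A minimal sketch of such a summary, written against the line format above; the helper name and the abbreviated sample lines are illustrative, not part of Kafka or Debezium:

```python
import re

# Matches the partition number and load time in the broker's
# "Finished loading offsets and group metadata" startup lines.
PATTERN = re.compile(r"__consumer_offsets-(?P<partition>\d+) in (?P<ms>\d+) milliseconds")

def summarize(log_lines):
    """Return (number of distinct partitions loaded, slowest load time in ms)."""
    times = {}
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            times[int(m.group("partition"))] = int(m.group("ms"))
    return len(times), max(times.values(), default=0)

sample = [
    "kafka_1 | ... Finished loading offsets and group metadata from __consumer_offsets-17 in 67 milliseconds for epoch 0 ...",
    "kafka_1 | ... Finished loading offsets and group metadata from __consumer_offsets-44 in 68 milliseconds for epoch 0 ...",
]
print(summarize(sample))  # → (2, 68)
```

Piping the full compose output through a filter like this is often easier than reading fifty near-identical lines.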
kafka_1 | 2022-04-21 13:24:30,250 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 86 milliseconds for epoch 0, of which 86 milliseconds was spent in the scheduler.
connect_1 | 2022-04-21 13:24:30,300 INFO || DistributedConfig values:
connect_1 |   access.control.allow.methods =
connect_1 |   access.control.allow.origin =
connect_1 |   admin.listeners = null
connect_1 |   bootstrap.servers = [kafka:9092]
connect_1 |   client.dns.lookup = use_all_dns_ips
connect_1 |   client.id =
connect_1 |   config.providers = []
connect_1 |   config.storage.replication.factor = 1
connect_1 |   config.storage.topic = my_connect_configs
connect_1 |   connect.protocol = sessioned
connect_1 |   connections.max.idle.ms = 540000
connect_1 |   connector.client.config.override.policy = All
connect_1 |   group.id = 1
connect_1 |   header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
connect_1 |   heartbeat.interval.ms = 3000
connect_1 |   inter.worker.key.generation.algorithm = HmacSHA256
connect_1 |   inter.worker.key.size = null
connect_1 |   inter.worker.key.ttl.ms = 3600000
connect_1 |   inter.worker.signature.algorithm = HmacSHA256
connect_1 |   inter.worker.verification.algorithms = [HmacSHA256]
connect_1 |   key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |   listeners = [http://:8083]
connect_1 |   metadata.max.age.ms = 300000
connect_1 |   metric.reporters = []
connect_1 |   metrics.num.samples = 2
connect_1 |   metrics.recording.level = INFO
connect_1 |   metrics.sample.window.ms = 30000
connect_1 |   offset.flush.interval.ms = 60000
connect_1 |   offset.flush.timeout.ms = 5000
connect_1 |   offset.storage.partitions = 25
connect_1 |   offset.storage.replication.factor = 1
connect_1 |   offset.storage.topic = my_connect_offsets
connect_1 |   plugin.path = [/kafka/connect]
connect_1 |   rebalance.timeout.ms = 60000
connect_1 |   receive.buffer.bytes = 32768
connect_1 |   reconnect.backoff.max.ms = 1000
connect_1 |   reconnect.backoff.ms = 50
connect_1 |   request.timeout.ms = 40000
connect_1 |   response.http.headers.config =
connect_1 |   rest.advertised.host.name = 172.19.0.5
connect_1 |   rest.advertised.listener = null
connect_1 |   rest.advertised.port = 8083
connect_1 |   rest.extension.classes = []
connect_1 |   retry.backoff.ms = 100
connect_1 |   sasl.client.callback.handler.class = null
connect_1 |   sasl.jaas.config = null
connect_1 |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |   sasl.kerberos.min.time.before.relogin = 60000
connect_1 |   sasl.kerberos.service.name = null
connect_1 |   sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |   sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |   sasl.login.callback.handler.class = null
connect_1 |   sasl.login.class = null
connect_1 |   sasl.login.connect.timeout.ms = null
connect_1 |   sasl.login.read.timeout.ms = null
connect_1 |   sasl.login.refresh.buffer.seconds = 300
connect_1 |   sasl.login.refresh.min.period.seconds = 60
connect_1 |   sasl.login.refresh.window.factor = 0.8
connect_1 |   sasl.login.refresh.window.jitter = 0.05
connect_1 |   sasl.login.retry.backoff.max.ms = 10000
connect_1 |   sasl.login.retry.backoff.ms = 100
connect_1 |   sasl.mechanism = GSSAPI
connect_1 |   sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |   sasl.oauthbearer.expected.audience = null
connect_1 |   sasl.oauthbearer.expected.issuer = null
connect_1 |   sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |   sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |   sasl.oauthbearer.scope.claim.name = scope
connect_1 |   sasl.oauthbearer.sub.claim.name = sub
connect_1 |   sasl.oauthbearer.token.endpoint.url = null
connect_1 |   scheduled.rebalance.max.delay.ms = 300000
connect_1 |   security.protocol = PLAINTEXT
connect_1 |   send.buffer.bytes = 131072
connect_1 |   session.timeout.ms = 10000
connect_1 |   socket.connection.setup.timeout.max.ms = 30000
connect_1 |   socket.connection.setup.timeout.ms = 10000
connect_1 |   ssl.cipher.suites = null
connect_1 |   ssl.client.auth = none
connect_1 |   ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |   ssl.endpoint.identification.algorithm = https
connect_1 |   ssl.engine.factory.class = null
connect_1 |   ssl.key.password = null
connect_1 |   ssl.keymanager.algorithm = SunX509
connect_1 |   ssl.keystore.certificate.chain = null
connect_1 |   ssl.keystore.key = null
connect_1 |   ssl.keystore.location = null
connect_1 |   ssl.keystore.password = null
connect_1 |   ssl.keystore.type = JKS
connect_1 |   ssl.protocol = TLSv1.3
connect_1 |   ssl.provider = null
connect_1 |   ssl.secure.random.implementation = null
connect_1 |   ssl.trustmanager.algorithm = PKIX
connect_1 |   ssl.truststore.certificates = null
connect_1 |   ssl.truststore.location = null
connect_1 |   ssl.truststore.password = null
connect_1 |   ssl.truststore.type = JKS
connect_1 |   status.storage.partitions = 5
connect_1 |   status.storage.replication.factor = 1
connect_1 |   status.storage.topic = my_connect_statuses
connect_1 |   task.shutdown.graceful.timeout.ms = 10000
connect_1 |   topic.creation.enable = true
connect_1 |   topic.tracking.allow.reset = true
connect_1 |   topic.tracking.enable = true
connect_1 |   value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |   worker.sync.timeout.ms = 3000
connect_1 |   worker.unsync.backoff.ms = 300000
connect_1 |  [org.apache.kafka.connect.runtime.distributed.DistributedConfig]
connect_1 | 2022-04-21 13:24:30,304 INFO || Creating Kafka admin client [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:30,308 INFO || AdminClientConfig values:
connect_1 |   bootstrap.servers = [kafka:9092]
connect_1 |   client.dns.lookup = use_all_dns_ips
connect_1 |   client.id =
connect_1 |   connections.max.idle.ms = 300000
connect_1 |   default.api.timeout.ms = 60000
connect_1 |   metadata.max.age.ms = 300000
connect_1 |   metric.reporters = []
connect_1 |   metrics.num.samples = 2
connect_1 |   metrics.recording.level = INFO
connect_1 |   metrics.sample.window.ms = 30000
connect_1 |   receive.buffer.bytes = 65536
connect_1 |   reconnect.backoff.max.ms = 1000
connect_1 |   reconnect.backoff.ms = 50
connect_1 |   request.timeout.ms = 30000
connect_1 |   retries = 2147483647
connect_1 |   retry.backoff.ms = 100
connect_1 |   sasl.client.callback.handler.class = null
connect_1 |   sasl.jaas.config = null
connect_1 |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |   sasl.kerberos.min.time.before.relogin = 60000
connect_1 |   sasl.kerberos.service.name = null
connect_1 |   sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |   sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |   sasl.login.callback.handler.class = null
connect_1 |   sasl.login.class = null
connect_1 |   sasl.login.connect.timeout.ms = null
connect_1 |   sasl.login.read.timeout.ms = null
connect_1 |   sasl.login.refresh.buffer.seconds = 300
connect_1 |   sasl.login.refresh.min.period.seconds = 60
connect_1 |   sasl.login.refresh.window.factor = 0.8
connect_1 |   sasl.login.refresh.window.jitter = 0.05
connect_1 |   sasl.login.retry.backoff.max.ms = 10000
connect_1 |   sasl.login.retry.backoff.ms = 100
connect_1 |   sasl.mechanism = GSSAPI
connect_1 |   sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |   sasl.oauthbearer.expected.audience = null
connect_1 |   sasl.oauthbearer.expected.issuer = null
connect_1 |   sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |   sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |   sasl.oauthbearer.scope.claim.name = scope
connect_1 |   sasl.oauthbearer.sub.claim.name = sub
connect_1 |   sasl.oauthbearer.token.endpoint.url = null
connect_1 |   security.protocol = PLAINTEXT
connect_1 |   security.providers = null
connect_1 |   send.buffer.bytes = 131072
connect_1 |   socket.connection.setup.timeout.max.ms = 30000
connect_1 |   socket.connection.setup.timeout.ms = 10000
connect_1 |   ssl.cipher.suites = null
connect_1 |   ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |   ssl.endpoint.identification.algorithm = https
connect_1 |   ssl.engine.factory.class = null
connect_1 |   ssl.key.password = null
connect_1 |   ssl.keymanager.algorithm = SunX509
connect_1 |   ssl.keystore.certificate.chain = null
connect_1 |   ssl.keystore.key = null
connect_1 |   ssl.keystore.location = null
connect_1 |   ssl.keystore.password = null
connect_1 |   ssl.keystore.type = JKS
connect_1 |   ssl.protocol = TLSv1.3
connect_1 |   ssl.provider = null
connect_1 |   ssl.secure.random.implementation = null
connect_1 |   ssl.trustmanager.algorithm = PKIX
connect_1 |   ssl.truststore.certificates = null
connect_1 |   ssl.truststore.location = null
connect_1 |   ssl.truststore.password = null
connect_1 |   ssl.truststore.type = JKS
connect_1 |  [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,415 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,416 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:30,418 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:30,418 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:30,418 INFO || Kafka startTimeMs: 1650547470417 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:30,823 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:30,824 INFO || App info kafka.admin.client for adminclient-1 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:30,831 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:30,832 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:30,832 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:30,844 INFO || Logging initialized @4971ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log]
connect_1 | 2022-04-21 13:24:30,908 INFO || Added connector for http://:8083 [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:30,909 INFO || Initializing REST server [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:30,919 INFO || jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 11.0.14.1+1 [org.eclipse.jetty.server.Server]
connect_1 | 2022-04-21 13:24:30,986 INFO || Started http_8083@f2d890c{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} [org.eclipse.jetty.server.AbstractConnector]
connect_1 | 2022-04-21 13:24:30,986 INFO || Started @5113ms [org.eclipse.jetty.server.Server]
connect_1 | 2022-04-21 13:24:31,012 INFO || Advertised URI: http://172.19.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,012 INFO || REST server listening at http://172.19.0.5:8083/, advertising URL http://172.19.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,013 INFO || Advertised URI: http://172.19.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,013 INFO || REST admin endpoints at http://172.19.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,013 INFO || Advertised URI: http://172.19.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,018 INFO || Creating Kafka admin client [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:31,019 INFO || AdminClientConfig values:
connect_1 |   bootstrap.servers = [kafka:9092]
connect_1 |   client.dns.lookup = use_all_dns_ips
connect_1 |   client.id =
connect_1 |   connections.max.idle.ms = 300000
connect_1 |   default.api.timeout.ms = 60000
connect_1 |   metadata.max.age.ms = 300000
connect_1 |   metric.reporters = []
connect_1 |   metrics.num.samples = 2
connect_1 |   metrics.recording.level = INFO
connect_1 |   metrics.sample.window.ms = 30000
connect_1 |   receive.buffer.bytes = 65536
connect_1 |   reconnect.backoff.max.ms = 1000
connect_1 |   reconnect.backoff.ms = 50
connect_1 |   request.timeout.ms = 30000
connect_1 |   retries = 2147483647
connect_1 |   retry.backoff.ms = 100
connect_1 |   sasl.client.callback.handler.class = null
connect_1 |   sasl.jaas.config = null
connect_1 |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |   sasl.kerberos.min.time.before.relogin = 60000
connect_1 |   sasl.kerberos.service.name = null
connect_1 |   sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |   sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |   sasl.login.callback.handler.class = null
connect_1 |   sasl.login.class = null
connect_1 |   sasl.login.connect.timeout.ms = null
connect_1 |   sasl.login.read.timeout.ms = null
connect_1 |   sasl.login.refresh.buffer.seconds = 300
connect_1 |   sasl.login.refresh.min.period.seconds = 60
connect_1 |   sasl.login.refresh.window.factor = 0.8
connect_1 |   sasl.login.refresh.window.jitter = 0.05
connect_1 |   sasl.login.retry.backoff.max.ms = 10000
connect_1 |   sasl.login.retry.backoff.ms = 100
connect_1 |   sasl.mechanism = GSSAPI
connect_1 |   sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |   sasl.oauthbearer.expected.audience = null
connect_1 |   sasl.oauthbearer.expected.issuer = null
connect_1 |   sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |   sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |   sasl.oauthbearer.scope.claim.name = scope
connect_1 |   sasl.oauthbearer.sub.claim.name = sub
connect_1 |   sasl.oauthbearer.token.endpoint.url = null
connect_1 |   security.protocol = PLAINTEXT
connect_1 |   security.providers = null
connect_1 |   send.buffer.bytes = 131072
connect_1 |   socket.connection.setup.timeout.max.ms = 30000
connect_1 |   socket.connection.setup.timeout.ms = 10000
connect_1 |   ssl.cipher.suites = null
connect_1 |   ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |   ssl.endpoint.identification.algorithm = https
connect_1 |   ssl.engine.factory.class = null
connect_1 |   ssl.key.password = null
connect_1 |   ssl.keymanager.algorithm = SunX509
connect_1 |   ssl.keystore.certificate.chain = null
connect_1 |   ssl.keystore.key = null
connect_1 |   ssl.keystore.location = null
connect_1 |   ssl.keystore.password = null
connect_1 |   ssl.keystore.type = JKS
connect_1 |   ssl.protocol = TLSv1.3
connect_1 |   ssl.provider = null
connect_1 |   ssl.secure.random.implementation = null
connect_1 |   ssl.trustmanager.algorithm = PKIX
connect_1 |   ssl.truststore.certificates = null
connect_1 |   ssl.truststore.location = null
connect_1 |   ssl.truststore.password = null
connect_1 |   ssl.truststore.type = JKS
connect_1 |  [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,024 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,025 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,025 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,025 INFO || Kafka startTimeMs: 1650547471025 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,046 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:31,047 INFO || App info kafka.admin.client for adminclient-2 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,053 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,054 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,054 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,062 INFO || Setting up All Policy for ConnectorClientConfigOverride. This will allow all client configurations to be overridden [org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy]
connect_1 | 2022-04-21 13:24:31,069 INFO || Creating Kafka admin client [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:31,070 INFO || AdminClientConfig values:
connect_1 |   bootstrap.servers = [kafka:9092]
connect_1 |   client.dns.lookup = use_all_dns_ips
connect_1 |   client.id =
connect_1 |   connections.max.idle.ms = 300000
connect_1 |   default.api.timeout.ms = 60000
connect_1 |   metadata.max.age.ms = 300000
connect_1 |   metric.reporters = []
connect_1 |   metrics.num.samples = 2
connect_1 |   metrics.recording.level = INFO
connect_1 |   metrics.sample.window.ms = 30000
connect_1 |   receive.buffer.bytes = 65536
connect_1 |   reconnect.backoff.max.ms = 1000
connect_1 |   reconnect.backoff.ms = 50
connect_1 |   request.timeout.ms = 30000
connect_1 |   retries = 2147483647
connect_1 |   retry.backoff.ms = 100
connect_1 |   sasl.client.callback.handler.class = null
connect_1 |   sasl.jaas.config = null
connect_1 |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |   sasl.kerberos.min.time.before.relogin = 60000
connect_1 |   sasl.kerberos.service.name = null
connect_1 |   sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |   sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |   sasl.login.callback.handler.class = null
connect_1 |   sasl.login.class = null
connect_1 |   sasl.login.connect.timeout.ms = null
connect_1 |   sasl.login.read.timeout.ms = null
connect_1 |   sasl.login.refresh.buffer.seconds = 300
connect_1 |   sasl.login.refresh.min.period.seconds = 60
connect_1 |   sasl.login.refresh.window.factor = 0.8
connect_1 |   sasl.login.refresh.window.jitter = 0.05
connect_1 |   sasl.login.retry.backoff.max.ms = 10000
connect_1 |   sasl.login.retry.backoff.ms = 100
connect_1 |   sasl.mechanism = GSSAPI
connect_1 |   sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |   sasl.oauthbearer.expected.audience = null
connect_1 |   sasl.oauthbearer.expected.issuer = null
connect_1 |   sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |   sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |   sasl.oauthbearer.scope.claim.name = scope
connect_1 |   sasl.oauthbearer.sub.claim.name = sub
connect_1 |   sasl.oauthbearer.token.endpoint.url = null
connect_1 |   security.protocol = PLAINTEXT
connect_1 |   security.providers = null
connect_1 |   send.buffer.bytes = 131072
connect_1 |   socket.connection.setup.timeout.max.ms = 30000
connect_1 |   socket.connection.setup.timeout.ms = 10000
connect_1 |   ssl.cipher.suites = null
connect_1 |   ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |   ssl.endpoint.identification.algorithm = https
connect_1 |   ssl.engine.factory.class = null
connect_1 |   ssl.key.password = null
connect_1 |   ssl.keymanager.algorithm = SunX509
connect_1 |   ssl.keystore.certificate.chain = null
connect_1 |   ssl.keystore.key = null
connect_1 |   ssl.keystore.location = null
connect_1 |   ssl.keystore.password = null
connect_1 |   ssl.keystore.type = JKS
connect_1 |   ssl.protocol = TLSv1.3
connect_1 |   ssl.provider = null
connect_1 |   ssl.secure.random.implementation = null
connect_1 |   ssl.trustmanager.algorithm = PKIX
connect_1 |   ssl.truststore.certificates = null
connect_1 |   ssl.truststore.location = null
connect_1 |   ssl.truststore.password = null
connect_1 |   ssl.truststore.type = JKS
connect_1 |  [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,074 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,075 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,075 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,075 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,075 INFO || Kafka startTimeMs: 1650547471075 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,096 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:31,097 INFO || App info kafka.admin.client for adminclient-3 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,100 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,100 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,101 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,107 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,107 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,107 INFO || Kafka startTimeMs: 1650547471107 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,236 INFO || JsonConverterConfig values:
connect_1 |   converter.type = key
connect_1 |   decimal.format = BASE64
connect_1 |   schemas.cache.size = 1000
connect_1 |   schemas.enable = false
connect_1 |  [org.apache.kafka.connect.json.JsonConverterConfig]
connect_1 | 2022-04-21 13:24:31,238 INFO || JsonConverterConfig values:
connect_1 |   converter.type = value
connect_1 |   decimal.format = BASE64
connect_1 |   schemas.cache.size = 1000
connect_1 |   schemas.enable = false
connect_1 |  [org.apache.kafka.connect.json.JsonConverterConfig]
connect_1 | 2022-04-21
13:24:31,238 INFO || Creating Kafka admin client [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,238 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,242 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,243 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,243 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,243 INFO || Kafka startTimeMs: 1650547471243 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,259 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,260 INFO || App info kafka.admin.client for adminclient-4 unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,263 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,263 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,263 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,273 INFO || Creating 
Kafka admin client [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,273 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,277 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,278 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,278 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,278 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,278 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,278 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,278 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,278 INFO || Kafka startTimeMs: 1650547471278 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,294 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,295 INFO || App info kafka.admin.client for adminclient-5 unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,300 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,300 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,300 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,307 INFO || Creating 
Kafka admin client [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,308 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,313 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:31,314 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,314 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,314 INFO || Kafka startTimeMs: 1650547471314 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,327 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,328 INFO || App info kafka.admin.client for adminclient-6 unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,329 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,330 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,330 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:31,348 INFO || Creating 
Kafka admin client [org.apache.kafka.connect.util.ConnectUtils] connect_1 | 2022-04-21 13:24:31,348 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | sasl.oauthbearer.scope.claim.name = scope
connect_1 | sasl.oauthbearer.sub.claim.name = sub
connect_1 | sasl.oauthbearer.token.endpoint.url = null
connect_1 | security.protocol = PLAINTEXT
connect_1 | security.providers = null
connect_1 | send.buffer.bytes = 131072
connect_1 | socket.connection.setup.timeout.max.ms = 30000
connect_1 | socket.connection.setup.timeout.ms = 10000
connect_1 | ssl.cipher.suites = null
connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | ssl.endpoint.identification.algorithm = https
connect_1 | ssl.engine.factory.class = null
connect_1 | ssl.key.password = null
connect_1 | ssl.keymanager.algorithm = SunX509
connect_1 | ssl.keystore.certificate.chain = null
connect_1 | ssl.keystore.key = null
connect_1 | ssl.keystore.location = null
connect_1 | ssl.keystore.password = null
connect_1 | ssl.keystore.type = JKS
connect_1 | ssl.protocol = TLSv1.3
connect_1 | ssl.provider = null
connect_1 | ssl.secure.random.implementation = null
connect_1 | ssl.trustmanager.algorithm = PKIX
connect_1 | ssl.truststore.certificates = null
connect_1 | ssl.truststore.location = null
connect_1 | ssl.truststore.password = null
connect_1 | ssl.truststore.type = JKS
connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,351 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,351 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,351 INFO || Kafka startTimeMs: 1650547471351 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,365 INFO || Kafka cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.connect.util.ConnectUtils]
connect_1 | 2022-04-21 13:24:31,365 INFO || App info kafka.admin.client for adminclient-7 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,368 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,368 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,369 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:31,391 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,391 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,391 INFO || Kafka startTimeMs: 1650547471391 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,394 INFO || Kafka Connect distributed worker initialization took 5028ms [org.apache.kafka.connect.cli.ConnectDistributed]
connect_1 | 2022-04-21 13:24:31,394 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect]
connect_1 | 2022-04-21 13:24:31,395 INFO || Initializing REST resources [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,395 INFO || [Worker clientId=connect-1, groupId=1] Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:31,395 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:31,396 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
connect_1 | 2022-04-21 13:24:31,396 INFO || Starting KafkaBasedLog with topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
connect_1 | 2022-04-21 13:24:31,397 INFO || AdminClientConfig values:
connect_1 | bootstrap.servers = [kafka:9092]
connect_1 | client.dns.lookup = use_all_dns_ips
connect_1 | client.id =
connect_1 | connections.max.idle.ms = 300000
connect_1 | default.api.timeout.ms = 60000
connect_1 | metadata.max.age.ms = 300000
connect_1 | metric.reporters = []
connect_1 | metrics.num.samples = 2
connect_1 | metrics.recording.level = INFO
connect_1 | metrics.sample.window.ms = 30000
connect_1 | receive.buffer.bytes = 65536
connect_1 | reconnect.backoff.max.ms = 1000
connect_1 | reconnect.backoff.ms = 50
connect_1 | request.timeout.ms = 30000
connect_1 | retries = 2147483647
connect_1 | retry.backoff.ms = 100
connect_1 | sasl.client.callback.handler.class = null
connect_1 | sasl.jaas.config = null
connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 | sasl.kerberos.min.time.before.relogin = 60000
connect_1 | sasl.kerberos.service.name = null
connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 | sasl.login.callback.handler.class = null
connect_1 | sasl.login.class = null
connect_1 | sasl.login.connect.timeout.ms = null
connect_1 | sasl.login.read.timeout.ms = null
connect_1 | sasl.login.refresh.buffer.seconds = 300
connect_1 | sasl.login.refresh.min.period.seconds = 60
connect_1 | sasl.login.refresh.window.factor = 0.8
connect_1 | sasl.login.refresh.window.jitter = 0.05
connect_1 | sasl.login.retry.backoff.max.ms = 10000
connect_1 | sasl.login.retry.backoff.ms = 100
connect_1 | sasl.mechanism = GSSAPI
connect_1 | sasl.oauthbearer.clock.skew.seconds = 30
connect_1 | sasl.oauthbearer.expected.audience = null
connect_1 | sasl.oauthbearer.expected.issuer = null
connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | sasl.oauthbearer.scope.claim.name = scope
connect_1 | sasl.oauthbearer.sub.claim.name = sub
connect_1 | sasl.oauthbearer.token.endpoint.url = null
connect_1 | security.protocol = PLAINTEXT
connect_1 | security.providers = null
connect_1 | send.buffer.bytes = 131072
connect_1 | socket.connection.setup.timeout.max.ms = 30000
connect_1 | socket.connection.setup.timeout.ms = 10000
connect_1 | ssl.cipher.suites = null
connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | ssl.endpoint.identification.algorithm = https
connect_1 | ssl.engine.factory.class = null
connect_1 | ssl.key.password = null
connect_1 | ssl.keymanager.algorithm = SunX509
connect_1 | ssl.keystore.certificate.chain = null
connect_1 | ssl.keystore.key = null
connect_1 | ssl.keystore.location = null
connect_1 | ssl.keystore.password = null
connect_1 | ssl.keystore.type = JKS
connect_1 | ssl.protocol = TLSv1.3
connect_1 | ssl.provider = null
connect_1 | ssl.secure.random.implementation = null
connect_1 | ssl.trustmanager.algorithm = PKIX
connect_1 | ssl.truststore.certificates = null
connect_1 | ssl.truststore.location = null
connect_1 | ssl.truststore.password = null
connect_1 | ssl.truststore.type = JKS
connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,401 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,401 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,401 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:31,402 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,402 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,402 INFO || Kafka startTimeMs: 1650547471402 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,441 INFO || Adding admin resources to main listener [org.apache.kafka.connect.runtime.rest.RestServer]
connect_1 | 2022-04-21 13:24:31,481 INFO || ProducerConfig values:
connect_1 | acks = -1
connect_1 | batch.size = 16384
connect_1 | bootstrap.servers = [kafka:9092]
connect_1 | buffer.memory = 33554432
connect_1 | client.dns.lookup = use_all_dns_ips
connect_1 | client.id = producer-1
connect_1 | compression.type = none
connect_1 | connections.max.idle.ms = 540000
connect_1 | delivery.timeout.ms = 2147483647
connect_1 | enable.idempotence = true
connect_1 | interceptor.classes = []
connect_1 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
connect_1 | linger.ms = 0
connect_1 | max.block.ms = 60000
connect_1 | max.in.flight.requests.per.connection = 1
connect_1 | max.request.size = 1048576
connect_1 | metadata.max.age.ms = 300000
connect_1 | metadata.max.idle.ms = 300000
connect_1 | metric.reporters = []
connect_1 | metrics.num.samples = 2
connect_1 | metrics.recording.level = INFO
connect_1 | metrics.sample.window.ms = 30000
connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
connect_1 | receive.buffer.bytes = 32768
connect_1 | reconnect.backoff.max.ms = 1000
connect_1 | reconnect.backoff.ms = 50
connect_1 | request.timeout.ms = 30000
connect_1 | retries = 2147483647
connect_1 | retry.backoff.ms = 100
connect_1 | sasl.client.callback.handler.class = null
connect_1 | sasl.jaas.config = null
connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 | sasl.kerberos.min.time.before.relogin = 60000
connect_1 | sasl.kerberos.service.name = null
connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 | sasl.login.callback.handler.class = null
connect_1 | sasl.login.class = null
connect_1 | sasl.login.connect.timeout.ms = null
connect_1 | sasl.login.read.timeout.ms = null
connect_1 | sasl.login.refresh.buffer.seconds = 300
connect_1 | sasl.login.refresh.min.period.seconds = 60
connect_1 | sasl.login.refresh.window.factor = 0.8
connect_1 | sasl.login.refresh.window.jitter = 0.05
connect_1 | sasl.login.retry.backoff.max.ms = 10000
connect_1 | sasl.login.retry.backoff.ms = 100
connect_1 | sasl.mechanism = GSSAPI
connect_1 | sasl.oauthbearer.clock.skew.seconds = 30
connect_1 | sasl.oauthbearer.expected.audience = null
connect_1 | sasl.oauthbearer.expected.issuer = null
connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | sasl.oauthbearer.scope.claim.name = scope
connect_1 | sasl.oauthbearer.sub.claim.name = sub
connect_1 | sasl.oauthbearer.token.endpoint.url = null
connect_1 | security.protocol = PLAINTEXT
connect_1 | security.providers = null
connect_1 | send.buffer.bytes = 131072
connect_1 | socket.connection.setup.timeout.max.ms = 30000
connect_1 | socket.connection.setup.timeout.ms = 10000
connect_1 | ssl.cipher.suites = null
connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | ssl.endpoint.identification.algorithm = https
connect_1 | ssl.engine.factory.class = null
connect_1 | ssl.key.password = null
connect_1 | ssl.keymanager.algorithm = SunX509
connect_1 | ssl.keystore.certificate.chain = null
connect_1 | ssl.keystore.key = null
connect_1 | ssl.keystore.location = null
connect_1 | ssl.keystore.password = null
connect_1 | ssl.keystore.type = JKS
connect_1 | ssl.protocol = TLSv1.3
connect_1 | ssl.provider = null
connect_1 | ssl.secure.random.implementation = null
connect_1 | ssl.trustmanager.algorithm = PKIX
connect_1 | ssl.truststore.certificates = null
connect_1 | ssl.truststore.location = null
connect_1 | ssl.truststore.password = null
connect_1 | ssl.truststore.type = JKS
connect_1 | transaction.timeout.ms = 60000
connect_1 | transactional.id = null
connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
connect_1 | [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,503 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,503 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,503 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,503 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,503 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,503 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,504 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,504 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,504 INFO || Kafka startTimeMs: 1650547471504 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,515 INFO || ConsumerConfig values:
connect_1 | allow.auto.create.topics = true
connect_1 | auto.commit.interval.ms = 5000
connect_1 | auto.offset.reset = earliest
connect_1 | bootstrap.servers = [kafka:9092]
connect_1 | check.crcs = true
connect_1 | client.dns.lookup = use_all_dns_ips
connect_1 | client.id = consumer-1-1
connect_1 | client.rack =
connect_1 | connections.max.idle.ms = 540000
connect_1 | default.api.timeout.ms = 60000
connect_1 | enable.auto.commit = false
connect_1 | exclude.internal.topics = true
connect_1 | fetch.max.bytes = 52428800
connect_1 | fetch.max.wait.ms = 500
connect_1 | fetch.min.bytes = 1
connect_1 | group.id = 1
connect_1 | group.instance.id = null
connect_1 | heartbeat.interval.ms = 3000
connect_1 | interceptor.classes = []
connect_1 | internal.leave.group.on.close = true
connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false
connect_1 | isolation.level = read_uncommitted
connect_1 | key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
connect_1 | max.partition.fetch.bytes = 1048576
connect_1 | max.poll.interval.ms = 300000
connect_1 | max.poll.records = 500
connect_1 | metadata.max.age.ms = 300000
connect_1 | metric.reporters = []
connect_1 | metrics.num.samples = 2
connect_1 | metrics.recording.level = INFO
connect_1 | metrics.sample.window.ms = 30000
connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
connect_1 | receive.buffer.bytes = 65536
connect_1 | reconnect.backoff.max.ms = 1000
connect_1 | reconnect.backoff.ms = 50
connect_1 | request.timeout.ms = 30000
connect_1 | retry.backoff.ms = 100
connect_1 | sasl.client.callback.handler.class = null
connect_1 | sasl.jaas.config = null
connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 | sasl.kerberos.min.time.before.relogin = 60000
connect_1 | sasl.kerberos.service.name = null
connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 | sasl.login.callback.handler.class = null
connect_1 | sasl.login.class = null
connect_1 | sasl.login.connect.timeout.ms = null
connect_1 | sasl.login.read.timeout.ms = null
connect_1 | sasl.login.refresh.buffer.seconds = 300
connect_1 | sasl.login.refresh.min.period.seconds = 60
connect_1 | sasl.login.refresh.window.factor = 0.8
connect_1 | sasl.login.refresh.window.jitter = 0.05
connect_1 | sasl.login.retry.backoff.max.ms = 10000
connect_1 | sasl.login.retry.backoff.ms = 100
connect_1 | sasl.mechanism = GSSAPI
connect_1 | sasl.oauthbearer.clock.skew.seconds = 30
connect_1 | sasl.oauthbearer.expected.audience = null
connect_1 | sasl.oauthbearer.expected.issuer = null
connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | sasl.oauthbearer.scope.claim.name = scope
connect_1 | sasl.oauthbearer.sub.claim.name = sub
connect_1 | sasl.oauthbearer.token.endpoint.url = null
connect_1 | security.protocol = PLAINTEXT
connect_1 | security.providers = null
connect_1 | send.buffer.bytes = 131072
connect_1 | session.timeout.ms = 45000
connect_1 | socket.connection.setup.timeout.max.ms = 30000
connect_1 | socket.connection.setup.timeout.ms = 10000
connect_1 | ssl.cipher.suites = null
connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | ssl.endpoint.identification.algorithm = https
connect_1 | ssl.engine.factory.class = null
connect_1 | ssl.key.password = null
connect_1 | ssl.keymanager.algorithm = SunX509
connect_1 | ssl.keystore.certificate.chain = null
connect_1 | ssl.keystore.key = null
connect_1 | ssl.keystore.location = null
connect_1 | ssl.keystore.password = null
connect_1 | ssl.keystore.type = JKS
connect_1 | ssl.protocol = TLSv1.3
connect_1 | ssl.provider = null
connect_1 | ssl.secure.random.implementation = null
connect_1 | ssl.trustmanager.algorithm = PKIX
connect_1 | ssl.truststore.certificates = null
connect_1 | ssl.truststore.location = null
connect_1 | ssl.truststore.password = null
connect_1 | ssl.truststore.type = JKS
connect_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,516 INFO || [Producer clientId=producer-1] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,522 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session]
connect_1 | 2022-04-21 13:24:31,523 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session]
connect_1 | 2022-04-21 13:24:31,525 INFO || node0 Scavenging every 660000ms [org.eclipse.jetty.server.session]
connect_1 | 2022-04-21 13:24:31,547 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,547 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,547 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,547 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,547 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,548 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,548 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,548 INFO || Kafka startTimeMs: 1650547471548 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,561 INFO || [Consumer clientId=consumer-1-1, groupId=1] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,572 INFO || [Consumer clientId=consumer-1-1, groupId=1] Subscribed to partition(s): my_connect_offsets-0, my_connect_offsets-5, my_connect_offsets-10, my_connect_offsets-20, my_connect_offsets-15, my_connect_offsets-9, my_connect_offsets-11, my_connect_offsets-4, my_connect_offsets-16, my_connect_offsets-17, my_connect_offsets-3, my_connect_offsets-24, my_connect_offsets-23, my_connect_offsets-13, my_connect_offsets-18, my_connect_offsets-22, my_connect_offsets-8, my_connect_offsets-2, my_connect_offsets-12, my_connect_offsets-19, my_connect_offsets-14, my_connect_offsets-1, my_connect_offsets-6, my_connect_offsets-7, my_connect_offsets-21 [org.apache.kafka.clients.consumer.KafkaConsumer]
connect_1 | 2022-04-21 13:24:31,577 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,578 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-5 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,578 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-10 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,578 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-20 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,578 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-15 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-9 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-11 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-16 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-17 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-24 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-23 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-13 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-18 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-22 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-8 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-12 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-19 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-14 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-6 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-7 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,579 INFO || [Consumer clientId=consumer-1-1, groupId=1] Seeking to EARLIEST offset of partition my_connect_offsets-21 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-0 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-5 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-10 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-20 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-15 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-9 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-11 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-4 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-16 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-17 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-3 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-24 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,625 INFO || [Consumer
clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-23 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-13 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-18 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-22 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-8 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-2 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-12 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-19 to 0 since the associated topicId 
changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-14 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-1 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-6 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-7 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,626 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-21 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,641 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,642 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,642 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,642 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,642 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,642 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,642 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,643 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,644 INFO || [Consumer clientId=consumer-1-1, groupId=1] Resetting offset for partition my_connect_offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] connect_1 | 2022-04-21 13:24:31,704 INFO || Finished reading KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] connect_1 | 2022-04-21 13:24:31,705 INFO || Started KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] connect_1 | 2022-04-21 13:24:31,705 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] connect_1 | 2022-04-21 13:24:31,708 INFO || Worker started [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 13:24:31,708 INFO || Starting KafkaBasedLog with topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog] connect_1 | 2022-04-21 13:24:31,722 INFO || ProducerConfig values: connect_1 | acks = -1 connect_1 | batch.size = 16384 connect_1 | bootstrap.servers = [kafka:9092] connect_1 | buffer.memory = 33554432 connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = producer-2 connect_1 | compression.type = none connect_1 | connections.max.idle.ms = 540000 connect_1 | delivery.timeout.ms = 120000 connect_1 | enable.idempotence = true connect_1 | interceptor.classes = [] connect_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer connect_1 | linger.ms = 0 connect_1 | max.block.ms = 60000 connect_1 | max.in.flight.requests.per.connection = 1 connect_1 | max.request.size = 1048576 connect_1 | metadata.max.age.ms = 300000 connect_1 | metadata.max.idle.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner connect_1 | receive.buffer.bytes = 32768 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 
connect_1 | retries = 0 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | 
ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | transaction.timeout.ms = 60000 connect_1 | transactional.id = null connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer connect_1 | [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,726 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,727 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:31,728 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,728 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,728 INFO || Kafka startTimeMs: 1650547471728 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:31,730 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [kafka:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = consumer-1-2 connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = 1 connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | 
internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 
connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 45000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,734 INFO || [Producer clientId=producer-2] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,737 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. 
[org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. 
[org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,738 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,739 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,739 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,739 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,739 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:31,739 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,739 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,739 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,739 INFO || Kafka startTimeMs: 1650547471739 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,748 INFO || [Consumer clientId=consumer-1-2, groupId=1] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,750 INFO || [Consumer clientId=consumer-1-2, groupId=1] Subscribed to partition(s): my_connect_statuses-0, my_connect_statuses-1, my_connect_statuses-4, my_connect_statuses-2, my_connect_statuses-3 [org.apache.kafka.clients.consumer.KafkaConsumer]
connect_1 | 2022-04-21 13:24:31,751 INFO || [Consumer clientId=consumer-1-2, groupId=1] Seeking to EARLIEST offset of partition my_connect_statuses-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,751 INFO || [Consumer clientId=consumer-1-2, groupId=1] Seeking to EARLIEST offset of partition my_connect_statuses-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,751 INFO || [Consumer clientId=consumer-1-2, groupId=1] Seeking to EARLIEST offset of partition my_connect_statuses-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,751 INFO || [Consumer clientId=consumer-1-2, groupId=1] Seeking to EARLIEST offset of partition my_connect_statuses-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,751 INFO || [Consumer clientId=consumer-1-2, groupId=1] Seeking to EARLIEST offset of partition my_connect_statuses-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,767 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-0 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,767 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-1 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,767 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-4 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,767 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-2 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,767 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-3 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,772 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting offset for partition my_connect_statuses-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,772 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting offset for partition my_connect_statuses-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,772 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting offset for partition my_connect_statuses-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,772 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting offset for partition my_connect_statuses-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,772 INFO || [Consumer clientId=consumer-1-2, groupId=1] Resetting offset for partition my_connect_statuses-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}.
[org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,812 INFO || Finished reading KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
connect_1 | 2022-04-21 13:24:31,812 INFO || Started KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
connect_1 | 2022-04-21 13:24:31,819 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
connect_1 | 2022-04-21 13:24:31,819 INFO || Starting KafkaBasedLog with topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog]
connect_1 | 2022-04-21 13:24:31,836 INFO || ProducerConfig values:
connect_1 | 	acks = -1
connect_1 | 	batch.size = 16384
connect_1 | 	bootstrap.servers = [kafka:9092]
connect_1 | 	buffer.memory = 33554432
connect_1 | 	client.dns.lookup = use_all_dns_ips
connect_1 | 	client.id = producer-3
connect_1 | 	compression.type = none
connect_1 | 	connections.max.idle.ms = 540000
connect_1 | 	delivery.timeout.ms = 2147483647
connect_1 | 	enable.idempotence = true
connect_1 | 	interceptor.classes = []
connect_1 | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
connect_1 | 	linger.ms = 0
connect_1 | 	max.block.ms = 60000
connect_1 | 	max.in.flight.requests.per.connection = 1
connect_1 | 	max.request.size = 1048576
connect_1 | 	metadata.max.age.ms = 300000
connect_1 | 	metadata.max.idle.ms = 300000
connect_1 | 	metric.reporters = []
connect_1 | 	metrics.num.samples = 2
connect_1 | 	metrics.recording.level = INFO
connect_1 | 	metrics.sample.window.ms = 30000
connect_1 | 	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
connect_1 | 	receive.buffer.bytes = 32768
connect_1 | 	reconnect.backoff.max.ms = 1000
connect_1 | 	reconnect.backoff.ms = 50
connect_1 | 	request.timeout.ms = 30000
connect_1 | 	retries = 2147483647
connect_1 | 	retry.backoff.ms = 100
connect_1 | 	sasl.client.callback.handler.class = null
connect_1 | 	sasl.jaas.config = null
connect_1 | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 | 	sasl.kerberos.min.time.before.relogin = 60000
connect_1 | 	sasl.kerberos.service.name = null
connect_1 | 	sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 | 	sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 | 	sasl.login.callback.handler.class = null
connect_1 | 	sasl.login.class = null
connect_1 | 	sasl.login.connect.timeout.ms = null
connect_1 | 	sasl.login.read.timeout.ms = null
connect_1 | 	sasl.login.refresh.buffer.seconds = 300
connect_1 | 	sasl.login.refresh.min.period.seconds = 60
connect_1 | 	sasl.login.refresh.window.factor = 0.8
connect_1 | 	sasl.login.refresh.window.jitter = 0.05
connect_1 | 	sasl.login.retry.backoff.max.ms = 10000
connect_1 | 	sasl.login.retry.backoff.ms = 100
connect_1 | 	sasl.mechanism = GSSAPI
connect_1 | 	sasl.oauthbearer.clock.skew.seconds = 30
connect_1 | 	sasl.oauthbearer.expected.audience = null
connect_1 | 	sasl.oauthbearer.expected.issuer = null
connect_1 | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | 	sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | 	sasl.oauthbearer.scope.claim.name = scope
connect_1 | 	sasl.oauthbearer.sub.claim.name = sub
connect_1 | 	sasl.oauthbearer.token.endpoint.url = null
connect_1 | 	security.protocol = PLAINTEXT
connect_1 | 	security.providers = null
connect_1 | 	send.buffer.bytes = 131072
connect_1 | 	socket.connection.setup.timeout.max.ms = 30000
connect_1 | 	socket.connection.setup.timeout.ms = 10000
connect_1 | 	ssl.cipher.suites = null
connect_1 | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | 	ssl.endpoint.identification.algorithm = https
connect_1 | 	ssl.engine.factory.class = null
connect_1 | 	ssl.key.password = null
connect_1 | 	ssl.keymanager.algorithm = SunX509
connect_1 | 	ssl.keystore.certificate.chain = null
connect_1 | 	ssl.keystore.key = null
connect_1 | 	ssl.keystore.location = null
connect_1 | 	ssl.keystore.password = null
connect_1 | 	ssl.keystore.type = JKS
connect_1 | 	ssl.protocol = TLSv1.3
connect_1 | 	ssl.provider = null
connect_1 | 	ssl.secure.random.implementation = null
connect_1 | 	ssl.trustmanager.algorithm = PKIX
connect_1 | 	ssl.truststore.certificates = null
connect_1 | 	ssl.truststore.location = null
connect_1 | 	ssl.truststore.password = null
connect_1 | 	ssl.truststore.type = JKS
connect_1 | 	transaction.timeout.ms = 60000
connect_1 | 	transactional.id = null
connect_1 | 	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
connect_1 |  [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config.
[org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,841 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:24:31,842 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,842 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,842 INFO || Kafka startTimeMs: 1650547471841 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,843 INFO || ConsumerConfig values:
connect_1 | 	allow.auto.create.topics = true
connect_1 | 	auto.commit.interval.ms = 5000
connect_1 | 	auto.offset.reset = earliest
connect_1 | 	bootstrap.servers = [kafka:9092]
connect_1 | 	check.crcs = true
connect_1 | 	client.dns.lookup = use_all_dns_ips
connect_1 | 	client.id = consumer-1-3
connect_1 | 	client.rack =
connect_1 | 	connections.max.idle.ms = 540000
connect_1 | 	default.api.timeout.ms = 60000
connect_1 | 	enable.auto.commit = false
connect_1 | 	exclude.internal.topics = true
connect_1 | 	fetch.max.bytes = 52428800
connect_1 | 	fetch.max.wait.ms = 500
connect_1 | 	fetch.min.bytes = 1
connect_1 | 	group.id = 1
connect_1 | 	group.instance.id = null
connect_1 | 	heartbeat.interval.ms = 3000
connect_1 | 	interceptor.classes = []
connect_1 | 	internal.leave.group.on.close = true
connect_1 | 	internal.throw.on.fetch.stable.offset.unsupported = false
connect_1 | 	isolation.level = read_uncommitted
connect_1 | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 | 	max.partition.fetch.bytes = 1048576
connect_1 | 	max.poll.interval.ms = 300000
connect_1 | 	max.poll.records = 500
connect_1 | 	metadata.max.age.ms = 300000
connect_1 | 	metric.reporters = []
connect_1 | 	metrics.num.samples = 2
connect_1 | 	metrics.recording.level = INFO
connect_1 | 	metrics.sample.window.ms = 30000
connect_1 | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
connect_1 | 	receive.buffer.bytes = 65536
connect_1 | 	reconnect.backoff.max.ms = 1000
connect_1 | 	reconnect.backoff.ms = 50
connect_1 | 	request.timeout.ms = 30000
connect_1 | 	retry.backoff.ms = 100
connect_1 | 	sasl.client.callback.handler.class = null
connect_1 | 	sasl.jaas.config = null
connect_1 | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 | 	sasl.kerberos.min.time.before.relogin = 60000
connect_1 | 	sasl.kerberos.service.name = null
connect_1 | 	sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 | 	sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 | 	sasl.login.callback.handler.class = null
connect_1 | 	sasl.login.class = null
connect_1 | 	sasl.login.connect.timeout.ms = null
connect_1 | 	sasl.login.read.timeout.ms = null
connect_1 | 	sasl.login.refresh.buffer.seconds = 300
connect_1 | 	sasl.login.refresh.min.period.seconds = 60
connect_1 | 	sasl.login.refresh.window.factor = 0.8
connect_1 | 	sasl.login.refresh.window.jitter = 0.05
connect_1 | 	sasl.login.retry.backoff.max.ms = 10000
connect_1 | 	sasl.login.retry.backoff.ms = 100
connect_1 | 	sasl.mechanism = GSSAPI
connect_1 | 	sasl.oauthbearer.clock.skew.seconds = 30
connect_1 | 	sasl.oauthbearer.expected.audience = null
connect_1 | 	sasl.oauthbearer.expected.issuer = null
connect_1 | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | 	sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | 	sasl.oauthbearer.scope.claim.name = scope
connect_1 | 	sasl.oauthbearer.sub.claim.name = sub
connect_1 | 	sasl.oauthbearer.token.endpoint.url = null
connect_1 | 	security.protocol = PLAINTEXT
connect_1 | 	security.providers = null
connect_1 | 	send.buffer.bytes = 131072
connect_1 | 	session.timeout.ms = 45000
connect_1 | 	socket.connection.setup.timeout.max.ms = 30000
connect_1 | 	socket.connection.setup.timeout.ms = 10000
connect_1 | 	ssl.cipher.suites = null
connect_1 | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | 	ssl.endpoint.identification.algorithm = https
connect_1 | 	ssl.engine.factory.class = null
connect_1 | 	ssl.key.password = null
connect_1 | 	ssl.keymanager.algorithm = SunX509
connect_1 | 	ssl.keystore.certificate.chain = null
connect_1 | 	ssl.keystore.key = null
connect_1 | 	ssl.keystore.location = null
connect_1 | 	ssl.keystore.password = null
connect_1 | 	ssl.keystore.type = JKS
connect_1 | 	ssl.protocol = TLSv1.3
connect_1 | 	ssl.provider = null
connect_1 | 	ssl.secure.random.implementation = null
connect_1 | 	ssl.trustmanager.algorithm = PKIX
connect_1 | 	ssl.truststore.certificates = null
connect_1 | 	ssl.truststore.location = null
connect_1 | 	ssl.truststore.password = null
connect_1 | 	ssl.truststore.type = JKS
connect_1 | 	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'plugin.path' was supplied but isn't a known config.
[org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,846 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,847 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,847 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,847 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,847 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,847 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:31,847 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,847 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,847 INFO || Kafka startTimeMs: 1650547471847 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:31,848 INFO || [Producer clientId=producer-3] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,855 INFO || [Consumer clientId=consumer-1-3, groupId=1] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,858 INFO || [Consumer clientId=consumer-1-3, groupId=1] Subscribed to partition(s): my_connect_configs-0 [org.apache.kafka.clients.consumer.KafkaConsumer]
connect_1 | 2022-04-21 13:24:31,858 INFO || [Consumer clientId=consumer-1-3, groupId=1] Seeking to EARLIEST offset of partition my_connect_configs-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,870 INFO || [Consumer clientId=consumer-1-3, groupId=1] Resetting the last seen epoch of partition my_connect_configs-0 to 0 since the associated topicId changed from null to fML0POG8THqTjJyfWIg5aQ [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,873 INFO || [Consumer clientId=consumer-1-3, groupId=1] Resetting offset for partition my_connect_configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}.
[org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:31,884 INFO || Finished reading KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog]
connect_1 | 2022-04-21 13:24:31,885 INFO || Started KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog]
connect_1 | 2022-04-21 13:24:31,885 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
connect_1 | 2022-04-21 13:24:31,885 INFO || [Worker clientId=connect-1, groupId=1] Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_configs-0 to 0 since the associated topicId changed from null to fML0POG8THqTjJyfWIg5aQ [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-0 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-1 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-4 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-2 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_statuses-3 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition contact_db-0 to 0 since the associated topicId changed from null to YoWfqSw9QnGTn7k_WJmAGg [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition contact.debezium.changes-0 to 0 since the associated topicId changed from null to 0RlDmGYWRauQMKie6FJ1CA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-0 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-5 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,898 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-10 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-20 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-15 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-9 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-11 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-4 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-16 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,899 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-17 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-3 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-24 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-23 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-13 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-18 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-22 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-8 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-2 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-12 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-19 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-14 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-1 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-6 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-7 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,900 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition my_connect_offsets-21 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-0 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-10 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-20 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-40 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-30 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-9 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-11 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-31 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-39 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-13 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-18 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-22 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-8 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-32 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-43 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-29 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:31,901 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-34 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw
[org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-1 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-6 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-41 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-27 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-48 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-5 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-15 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition 
__consumer_offsets-35 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-25 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-46 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-26 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-36 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-44 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,902 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-16 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-37 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 
13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-17 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-45 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-3 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-24 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-38 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-33 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-23 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-28 to 0 since the associated topicId changed 
from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-2 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-12 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-19 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-14 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-4 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-47 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,904 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-49 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,904 INFO || [Worker clientId=connect-1, groupId=1] Resetting 
the last seen epoch of partition __consumer_offsets-42 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,904 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-7 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,904 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition __consumer_offsets-21 to 0 since the associated topicId changed from null to Dv2RfarfQ8Osn4vXaQ1KOw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,906 INFO || [Worker clientId=connect-1, groupId=1] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:31,907 INFO || [Worker clientId=connect-1, groupId=1] Discovered group coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] connect_1 | 2022-04-21 13:24:31,910 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] connect_1 | 2022-04-21 13:24:31,911 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] kafka_1 | 2022-04-21 13:24:31,930 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group 1 in Empty state. Created a new member id connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 and request the member to rejoin with this id. 
connect_1 | 2022-04-21 13:24:31,935 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
kafka_1 | 2022-04-21 13:24:31,943 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group 1 in state PreparingRebalance with old generation 8 (__consumer_offsets-49) (reason: Adding new member connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 with group instance id None)
kafka_1 | 2022-04-21 13:24:31,949 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group 1 generation 9 (__consumer_offsets-49) with 1 members
connect_1 | 2022-04-21 13:24:31,954 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=9, memberId='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
connect_1 | Apr 21, 2022 1:24:31 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
connect_1 | WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored.
... (the same harmless Jersey warning is logged for ConnectorsResource, ConnectorPluginsResource and LoggingResource)
kafka_1 | 2022-04-21 13:24:31,983 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Assignment received from leader connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 for group 1 for generation 9. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 13:24:32,038 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=9, memberId='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
connect_1 | 2022-04-21 13:24:32,040 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 9 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', leaderUrl='http://172.19.0.5:8083/', offset=6, connectorIds=[kafka-contact-connector], taskIds=[kafka-contact-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,042 WARN || [Worker clientId=connect-1, groupId=1] Catching up to assignment's config offset. [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,042 INFO || [Worker clientId=connect-1, groupId=1] Current config state offset -1 is behind group assignment 6, reading to end of config log [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,049 INFO || [Worker clientId=connect-1, groupId=1] Finished reading to end of log and updated config snapshot, new config log offset: 6 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,049 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 6 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,051 INFO || [Worker clientId=connect-1, groupId=1] Starting task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,052 INFO || [Worker clientId=connect-1, groupId=1] Starting connector kafka-contact-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,057 INFO || Creating task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,057 INFO || Creating connector kafka-contact-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,063 INFO || ConnectorConfig values:
connect_1 |     config.action.reload = restart
connect_1 |     connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |     errors.log.enable = false
connect_1 |     errors.log.include.messages = false
connect_1 |     errors.retry.delay.max.ms = 60000
connect_1 |     errors.retry.timeout = 0
connect_1 |     errors.tolerance = none
connect_1 |     header.converter = null
connect_1 |     key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |     name = kafka-contact-connector
connect_1 |     predicates = []
connect_1 |     tasks.max = 1
connect_1 |     transforms = [Reroute]
connect_1 |     value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig]
... (a SourceConnectorConfig dump follows with the same values plus topic.creation.groups = [])
connect_1 | 2022-04-21 13:24:32,081 INFO || EnrichedConnectorConfig values:
... (same values as above, plus the transform settings:)
connect_1 |     transforms.Reroute.key.enforce.uniqueness = true
connect_1 |     transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*)
connect_1 |     transforms.Reroute.key.field.replacement = $1
connect_1 |     transforms.Reroute.negate = false
connect_1 |     transforms.Reroute.predicate =
connect_1 |     transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*)
connect_1 |     transforms.Reroute.topic.replacement = contact.debezium.changes
connect_1 |     transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
... (further EnrichedConnectorConfig and EnrichedSourceConnectorConfig dumps repeat these values, adding topic.creation.default.exclude = [], topic.creation.default.include = [.*], topic.creation.default.partitions = 1 and topic.creation.default.replication.factor = 1)
connect_1 | 2022-04-21 13:24:32,093 INFO || TaskConfig values:
connect_1 |     task.class = class io.debezium.connector.mysql.MySqlConnectorTask
connect_1 | [org.apache.kafka.connect.runtime.TaskConfig]
connect_1 | 2022-04-21 13:24:32,095 INFO || Instantiated task kafka-contact-connector-0 with version 1.9.0.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,096 INFO || Instantiated connector kafka-contact-connector with version 1.9.0.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
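
The registration request that produced this connector is not shown in the log, but a payload consistent with the values Connect prints would look roughly like the sketch below. The MySQL connection properties (database.hostname, user, password, server id, include lists, etc.) are never echoed in these dumps and are deliberately omitted rather than guessed:

```json
{
  "name": "kafka-contact-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms": "Reroute",
    "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
    "transforms.Reroute.topic.regex": "contact_db\\.splio3_data\\.(.*)",
    "transforms.Reroute.topic.replacement": "contact.debezium.changes",
    "transforms.Reroute.key.field.regex": "contact_db\\.splio3_data\\.(.*)",
    "transforms.Reroute.key.field.replacement": "$1",
    "transforms.Reroute.key.enforce.uniqueness": "true"
  }
}
```

A payload like this would normally be POSTed to the Connect REST API on port 8083 (the leaderUrl visible in the assignment above).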
2022-04-21 13:24:32,096 INFO || JsonConverterConfig values:
connect_1 |     converter.type = key
connect_1 |     decimal.format = BASE64
connect_1 |     schemas.cache.size = 1000
connect_1 |     schemas.enable = false
connect_1 | [org.apache.kafka.connect.json.JsonConverterConfig]
... (the same JsonConverterConfig values are logged again with converter.type = value)
connect_1 | 2022-04-21 13:24:32,097 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task kafka-contact-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,097 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task kafka-contact-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,097 INFO || Finished creating connector kafka-contact-connector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,098 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task kafka-contact-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
... (SourceConnectorConfig, EnrichedConnectorConfig and EnrichedSourceConnectorConfig are dumped once more for the task, with the same values as above)
connect_1 | 2022-04-21 13:24:32,119 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.ByLogicalTableRouter} [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:24:32,123 INFO || ProducerConfig values:
connect_1 |     acks = -1
connect_1 |     batch.size = 16384
connect_1 |     bootstrap.servers = [kafka:9092]
connect_1 |     buffer.memory = 33554432
connect_1 |     client.id = connector-producer-kafka-contact-connector-0
connect_1 |     compression.type = none
connect_1 |     delivery.timeout.ms = 2147483647
connect_1 |     enable.idempotence = true
connect_1 |     key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
connect_1 |     linger.ms = 0
connect_1 |     max.block.ms = 9223372036854775807
connect_1 |     max.in.flight.requests.per.connection = 1
connect_1 |     max.request.size = 1048576
connect_1 |     retries = 2147483647
connect_1 |     retry.backoff.ms = 100
connect_1 |     security.protocol = PLAINTEXT
connect_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
... (the remaining default metrics, SASL and SSL producer settings are elided)
= TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | transaction.timeout.ms = 60000 connect_1 | transactional.id = null connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer connect_1 | [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:32,124 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-0 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,125 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-1 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,125 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-4 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,125 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-2 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,125 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-3 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,128 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:32,129 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:32,129 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,129 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,129 INFO || Kafka startTimeMs: 1650547472129 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,131 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connector-adminclient-kafka-contact-connector-0 connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 
connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,133 WARN || The configuration 
'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,134 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,134 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,134 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,134 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,134 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,135 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,136 WARN || The configuration 'key.converter' was supplied but isn't a known config.
[org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:24:32,137 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:32,137 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:32,137 INFO || Kafka startTimeMs: 1650547472136 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:32,138 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:32,151 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:24:32,159 INFO || [Producer clientId=producer-3] Resetting the last seen epoch of partition my_connect_configs-0 to 0 since the associated topicId changed from null to fML0POG8THqTjJyfWIg5aQ [org.apache.kafka.clients.Metadata]
connect_1 | Apr 21, 2022 1:24:32 PM org.glassfish.jersey.internal.Errors logErrors
connect_1 | WARNING: The following warnings have been detected: WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
connect_1 | WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
connect_1 | WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
connect_1 | WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
connect_1 | WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
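The connector settings echoed throughout these config dumps (name, Reroute transform, MySQL connection, snapshot mode) are consistent with a registration payload along the following lines. This is a reconstruction for readability, not the payload actually submitted: every value below is taken from the log (the password is masked there, so it is masked here too), but the exact set of keys supplied at registration time is an assumption.

```json
{
  "name": "kafka-contact-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "172.19.0.3",
    "database.port": "3306",
    "database.user": "root",
    "database.password": "********",
    "database.server.id": "438567",
    "database.server.name": "contact_db",
    "database.include.list": "splio3_data",
    "table.include.list": "splio3_data\\.(.*)",
    "snapshot.mode": "schema_only",
    "database.history.kafka.bootstrap.servers": "172.19.0.4:9092",
    "database.history.kafka.topic": "contact_db.schema-changes",
    "transforms": "Reroute",
    "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
    "transforms.Reroute.topic.regex": "contact_db\\.splio3_data\\.(.*)",
    "transforms.Reroute.topic.replacement": "contact.debezium.changes",
    "transforms.Reroute.key.field.name": "universe",
    "transforms.Reroute.key.field.regex": "contact_db\\.splio3_data\\.(.*)",
    "transforms.Reroute.key.field.replacement": "$1"
  }
}
```

A payload of this shape would typically be POSTed to the Connect REST endpoint (the one whose startup is logged just below) to create the connector.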
connect_1 | connect_1 | 2022-04-21 13:24:32,174 INFO || Started o.e.j.s.ServletContextHandler@1b30a54e{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler] connect_1 | 2022-04-21 13:24:32,174 INFO || REST resources initialized; server is started and ready to handle requests [org.apache.kafka.connect.runtime.rest.RestServer] connect_1 | 2022-04-21 13:24:32,174 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect] connect_1 | 2022-04-21 13:24:32,190 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 13:24:32,194 INFO || SourceConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig] connect_1 | 2022-04-21 13:24:32,210 INFO || EnrichedConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = 
kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | transforms.Reroute.key.enforce.uniqueness = true connect_1 | transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.key.field.replacement = $1 connect_1 | transforms.Reroute.negate = false connect_1 | transforms.Reroute.predicate = connect_1 | transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.topic.replacement = contact.debezium.changes connect_1 | transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] connect_1 | 2022-04-21 13:24:32,211 INFO || EnrichedSourceConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.default.exclude = [] connect_1 | topic.creation.default.include = [.*] connect_1 | topic.creation.default.partitions = 1 connect_1 | topic.creation.default.replication.factor = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig] connect_1 | 2022-04-21 13:24:32,211 INFO || EnrichedConnectorConfig values: 
connect_1 | config.action.reload = restart
connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | errors.log.enable = false
connect_1 | errors.log.include.messages = false
connect_1 | errors.retry.delay.max.ms = 60000
connect_1 | errors.retry.timeout = 0
connect_1 | errors.tolerance = none
connect_1 | header.converter = null
connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | name = kafka-contact-connector
connect_1 | predicates = []
connect_1 | tasks.max = 1
connect_1 | topic.creation.default.exclude = []
connect_1 | topic.creation.default.include = [.*]
connect_1 | topic.creation.default.partitions = 1
connect_1 | topic.creation.default.replication.factor = 1
connect_1 | topic.creation.groups = []
connect_1 | transforms = [Reroute]
connect_1 | transforms.Reroute.key.enforce.uniqueness = true
connect_1 | transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*)
connect_1 | transforms.Reroute.key.field.replacement = $1
connect_1 | transforms.Reroute.negate = false
connect_1 | transforms.Reroute.predicate =
connect_1 | transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*)
connect_1 | transforms.Reroute.topic.replacement = contact.debezium.changes
connect_1 | transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 13:24:32,227 INFO || Starting MySqlConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || connector.class = io.debezium.connector.mysql.MySqlConnector [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || snapshot.locking.mode = none [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || topic.creation.default.partitions = 1 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || tasks.max = 1 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || database.history.kafka.topic = contact_db.schema-changes [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || transforms.Reroute.key.field.name = universe [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || transforms = Reroute [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,228 INFO || include.schema.changes = true [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || transforms.Reroute.topic.replacement = contact.debezium.changes [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || topic.creation.default.replication.factor = 1 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.history.store.only.captured.tables.ddl = true [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.user = root [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || transforms.Reroute.type = io.debezium.transforms.ByLogicalTableRouter [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.server.id = 438567 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || topic.creation.default.cleanup.policy = compact [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.history.kafka.bootstrap.servers = 172.19.0.4:9092 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.server.name = contact_db [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.port = 3306 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || transforms.Reroute.key.field.replacement = $1 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || key.converter.schemas.enable = false [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || column.exclude.list = .*\.last_.*,.*\.nouverts.*,.*\.nclicks.*,.*\.nenvois,.*\.nbounces,.*\.nbounces_sms,.*\.nclickssms,.*\.nx,.*\.nsms,.*\.nclicksms,.*\.ntransfo,.*\.npurchases,.*\.sommeca [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.serverTimezone = Europe/Paris [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || task.class = io.debezium.connector.mysql.MySqlConnectorTask [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.hostname = 172.19.0.3 [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.connectionTimeZone = Europe/Paris [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || value.converter.schemas.enable = false [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || name = kafka-contact-connector [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || table.include.list = splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || database.include.list = splio3_data [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,229 INFO || snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask]
mysql_1 | mbind: Operation not permitted
connect_1 | 2022-04-21 13:24:32,468 INFO || Found previous partition offset MySqlPartition [sourcePartition={server=contact_db}]: {transaction_id=null, file=mysql-bin.000004, pos=9138, row=1, event=2} [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:24:32,522 INFO || KafkaDatabaseHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=contact_db-dbhistory, bootstrap.servers=172.19.0.4:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=contact_db-dbhistory} [io.debezium.relational.history.KafkaDatabaseHistory]
connect_1 | 2022-04-21 13:24:32,522 INFO || KafkaDatabaseHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=172.19.0.4:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=contact_db-dbhistory, linger.ms=0} [io.debezium.relational.history.KafkaDatabaseHistory]
connect_1 | 2022-04-21 13:24:32,523 INFO || Requested thread factory for connector MySqlConnector, id = contact_db named = db-history-config-check [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:24:32,525 INFO || ProducerConfig values:
connect_1 | acks = 1
connect_1 | batch.size = 32768
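The Reroute transform logged above (io.debezium.transforms.ByLogicalTableRouter) merges every per-table topic matching `contact_db\.splio3_data\.(.*)` into the single topic `contact.debezium.changes`, and derives an extra key field named `universe` so merged keys stay unique. The sketch below illustrates only that regex behavior in plain Python; it is not the transform's actual implementation.

```python
import re

# Values taken from the transforms.Reroute.* entries in the log above.
TOPIC_REGEX = r"contact_db\.splio3_data\.(.*)"
TOPIC_REPLACEMENT = "contact.debezium.changes"
KEY_FIELD_NAME = "universe"           # transforms.Reroute.key.field.name
KEY_FIELD_REGEX = r"contact_db\.splio3_data\.(.*)"
KEY_FIELD_REPLACEMENT = r"\1"         # "$1" in Connect's replacement syntax

def reroute(topic: str):
    """Return (new_topic, extra_key_field) for a record's original topic.

    Non-matching topics pass through unchanged with no extra key field,
    mirroring how the router only rewrites topics that match topic.regex.
    """
    if re.fullmatch(TOPIC_REGEX, topic) is None:
        return topic, None
    key_field_value = re.sub(KEY_FIELD_REGEX, KEY_FIELD_REPLACEMENT, topic)
    return TOPIC_REPLACEMENT, {KEY_FIELD_NAME: key_field_value}

# Example: a change event from table splio3_data.contacts
# → ('contact.debezium.changes', {'universe': 'contacts'})
print(reroute("contact_db.splio3_data.contacts"))
```

With `key.enforce.uniqueness = true` (also logged), the real transform appends this derived field to each record's key, so rows with the same primary key from different source tables do not collide in the merged, compacted topic.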
connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | buffer.memory = 1048576 connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | compression.type = none connect_1 | connections.max.idle.ms = 540000 connect_1 | delivery.timeout.ms = 120000 connect_1 | enable.idempotence = true connect_1 | interceptor.classes = [] connect_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer connect_1 | linger.ms = 0 connect_1 | max.block.ms = 10000 connect_1 | max.in.flight.requests.per.connection = 5 connect_1 | max.request.size = 1048576 connect_1 | metadata.max.age.ms = 300000 connect_1 | metadata.max.idle.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner connect_1 | receive.buffer.bytes = 32768 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 1 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | 
sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | transaction.timeout.ms = 60000 connect_1 | transactional.id = null connect_1 | value.serializer = class org.apache.kafka.common.serialization.StringSerializer connect_1 | [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:24:32,529 INFO || Kafka version: 3.1.0 
[org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,529 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,529 INFO || Kafka startTimeMs: 1650547472529 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,538 INFO || [Producer clientId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,559 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask] connect_1 | 2022-04-21 13:24:32,572 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] connect_1 | 2022-04-21 13:24:32,574 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 
| metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | 
security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:32,581 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,581 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,581 INFO || Kafka startTimeMs: 1650547472581 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,588 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,592 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 
2022-04-21 13:24:32,592 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,593 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,593 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,593 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,594 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,595 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | 
metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | 
sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:32,597 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,598 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,598 INFO || Kafka startTimeMs: 1650547472597 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,598 INFO || Creating thread debezium-mysqlconnector-contact_db-db-history-config-check [io.debezium.util.Threads] connect_1 | 2022-04-21 13:24:32,600 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = 
contact_db-dbhistory-topic-check connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 1 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = 
null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'value.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'acks' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'batch.size' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'max.block.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'buffer.memory' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'key.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 WARN || The configuration 'linger.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:24:32,604 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,604 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,604 INFO || Kafka startTimeMs: 1650547472604 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,610 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,610 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,621 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,621 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,621 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,621 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 
2022-04-21 13:24:32,621 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,623 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,624 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 
connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null 
connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:32,627 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,627 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,627 INFO || Kafka startTimeMs: 1650547472627 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,631 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,632 INFO || Database history topic 'contact_db.schema-changes' has correct settings [io.debezium.relational.history.KafkaDatabaseHistory] connect_1 | 2022-04-21 13:24:32,633 INFO || App info kafka.admin.client for contact_db-dbhistory-topic-check unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,635 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,635 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request 
joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,635 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,635 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,635 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,637 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,637 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | 
metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | 
send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 13:24:32,638 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,638 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,639 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,642 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,642 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,642 INFO || Kafka startTimeMs: 1650547472642 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,649 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] 
Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,649 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:24:32,657 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,657 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 13:24:32,658 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,658 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,658 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 13:24:32,660 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:24:32,660 INFO || Started database history recovery [io.debezium.relational.history.DatabaseHistoryMetrics] connect_1 | 2022-04-21 13:24:32,672 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | 
connect_1 | 	default.api.timeout.ms = 60000
connect_1 | 	enable.auto.commit = false
connect_1 | 	exclude.internal.topics = true
connect_1 | 	fetch.max.bytes = 52428800
connect_1 | 	fetch.max.wait.ms = 500
connect_1 | 	fetch.min.bytes = 1
connect_1 | 	group.id = contact_db-dbhistory
connect_1 | 	group.instance.id = null
connect_1 | 	heartbeat.interval.ms = 3000
connect_1 | 	interceptor.classes = []
connect_1 | 	internal.leave.group.on.close = true
connect_1 | 	internal.throw.on.fetch.stable.offset.unsupported = false
connect_1 | 	isolation.level = read_uncommitted
connect_1 | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 | 	max.partition.fetch.bytes = 1048576
connect_1 | 	max.poll.interval.ms = 300000
connect_1 | 	max.poll.records = 500
connect_1 | 	metadata.max.age.ms = 300000
connect_1 | 	metric.reporters = []
connect_1 | 	metrics.num.samples = 2
connect_1 | 	metrics.recording.level = INFO
connect_1 | 	metrics.sample.window.ms = 30000
connect_1 | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
connect_1 | 	receive.buffer.bytes = 65536
connect_1 | 	reconnect.backoff.max.ms = 1000
connect_1 | 	reconnect.backoff.ms = 50
connect_1 | 	request.timeout.ms = 30000
connect_1 | 	retry.backoff.ms = 100
connect_1 | 	sasl.client.callback.handler.class = null
connect_1 | 	sasl.jaas.config = null
connect_1 | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 | 	sasl.kerberos.min.time.before.relogin = 60000
connect_1 | 	sasl.kerberos.service.name = null
connect_1 | 	sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 | 	sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 | 	sasl.login.callback.handler.class = null
connect_1 | 	sasl.login.class = null
connect_1 | 	sasl.login.connect.timeout.ms = null
connect_1 | 	sasl.login.read.timeout.ms = null
connect_1 | 	sasl.login.refresh.buffer.seconds = 300
connect_1 | 	sasl.login.refresh.min.period.seconds = 60
connect_1 | 	sasl.login.refresh.window.factor = 0.8
connect_1 | 	sasl.login.refresh.window.jitter = 0.05
connect_1 | 	sasl.login.retry.backoff.max.ms = 10000
connect_1 | 	sasl.login.retry.backoff.ms = 100
connect_1 | 	sasl.mechanism = GSSAPI
connect_1 | 	sasl.oauthbearer.clock.skew.seconds = 30
connect_1 | 	sasl.oauthbearer.expected.audience = null
connect_1 | 	sasl.oauthbearer.expected.issuer = null
connect_1 | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 | 	sasl.oauthbearer.jwks.endpoint.url = null
connect_1 | 	sasl.oauthbearer.scope.claim.name = scope
connect_1 | 	sasl.oauthbearer.sub.claim.name = sub
connect_1 | 	sasl.oauthbearer.token.endpoint.url = null
connect_1 | 	security.protocol = PLAINTEXT
connect_1 | 	security.providers = null
connect_1 | 	send.buffer.bytes = 131072
connect_1 | 	session.timeout.ms = 10000
connect_1 | 	socket.connection.setup.timeout.max.ms = 30000
connect_1 | 	socket.connection.setup.timeout.ms = 10000
connect_1 | 	ssl.cipher.suites = null
connect_1 | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 | 	ssl.endpoint.identification.algorithm = https
connect_1 | 	ssl.engine.factory.class = null
connect_1 | 	ssl.key.password = null
connect_1 | 	ssl.keymanager.algorithm = SunX509
connect_1 | 	ssl.keystore.certificate.chain = null
connect_1 | 	ssl.keystore.key = null
connect_1 | 	ssl.keystore.location = null
connect_1 | 	ssl.keystore.password = null
connect_1 | 	ssl.keystore.type = JKS
connect_1 | 	ssl.protocol = TLSv1.3
connect_1 | 	ssl.provider = null
connect_1 | 	ssl.secure.random.implementation = null
connect_1 | 	ssl.trustmanager.algorithm = PKIX
connect_1 | 	ssl.truststore.certificates = null
connect_1 | 	ssl.truststore.location = null
connect_1 | 	ssl.truststore.password = null
connect_1 | 	ssl.truststore.type = JKS
connect_1 | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:24:32,674 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:32,674 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:32,674 INFO || Kafka startTimeMs: 1650547472674 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:32,675 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Subscribed to topic(s): contact_db.schema-changes [org.apache.kafka.clients.consumer.KafkaConsumer]
connect_1 | 2022-04-21 13:24:32,680 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:32,680 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:24:32,689 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Discovered group coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,690 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:24:32,699 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group contact_db-dbhistory in Empty state. Created a new member id contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d and request the member to rejoin with this id.
connect_1 | 2022-04-21 13:24:32,701 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: need to re-join with the given member-id [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,701 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:24:32,704 - INFO [data-plane-kafka-request-handler-6:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group contact_db-dbhistory in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d with group instance id None)
kafka_1 | 2022-04-21 13:24:32,705 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group contact_db-dbhistory generation 1 (__consumer_offsets-26) with 1 members
connect_1 | 2022-04-21 13:24:32,707 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Successfully joined group with generation Generation{generationId=1, memberId='contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,710 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Finished assignment for group at generation 1: {contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d=Assignment(partitions=[contact_db.schema-changes-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:24:32,715 - INFO [data-plane-kafka-request-handler-7:Logging@66] - [GroupCoordinator 1]: Assignment received from leader contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d for group contact_db-dbhistory for generation 1. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 13:24:32,719 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Successfully synced group in generation Generation{generationId=1, memberId='contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,720 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Notifying assignor about the new Assignment(partitions=[contact_db.schema-changes-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,720 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Adding newly assigned partitions: contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,740 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Found no committed offset for partition contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:32,743 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting offset for partition contact_db.schema-changes-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:24:33,307 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Revoke previously assigned partitions contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:33,307 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Member contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d sending LeaveGroup request to coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:33,309 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:24:33,309 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:24:33,316 - INFO [data-plane-kafka-request-handler-2:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group contact_db-dbhistory in state PreparingRebalance with old generation 1 (__consumer_offsets-26) (reason: Removing member contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d on LeaveGroup)
kafka_1 | 2022-04-21 13:24:33,319 - INFO [data-plane-kafka-request-handler-2:Logging@66] - [GroupCoordinator 1]: Group contact_db-dbhistory with generation 2 is now empty (__consumer_offsets-26)
kafka_1 | 2022-04-21 13:24:33,324 - INFO [data-plane-kafka-request-handler-2:Logging@66] - [GroupCoordinator 1]: Member MemberMetadata(memberId=contact_db-dbhistory-91872b52-ca00-4b2e-9b32-58f7462ba07d, groupInstanceId=None, clientId=contact_db-dbhistory, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group contact_db-dbhistory through explicit `LeaveGroup` request
connect_1 | 2022-04-21 13:24:33,328 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:33,328 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:33,329 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:24:33,330 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:24:33,331 INFO || Finished database history recovery of 2 change(s) in 670 ms [io.debezium.relational.history.DatabaseHistoryMetrics]
connect_1 | 2022-04-21 13:24:33,367 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 13:24:33,372 INFO || Get all known binlogs from MySQL [io.debezium.connector.mysql.MySqlConnection]
connect_1 | 2022-04-21 13:24:33,375 INFO || MySQL has the binlog file 'mysql-bin.000004' required by the connector [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 13:24:33,403 INFO || Requested thread factory for connector MySqlConnector, id = contact_db named = change-event-source-coordinator [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:24:33,405 INFO || Creating thread debezium-mysqlconnector-contact_db-change-event-source-coordinator [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:24:33,405 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:24:33,406 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Executing source task [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:24:33,411 INFO MySQL|contact_db|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:24:33,412 INFO MySQL|contact_db|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:24:33,420 INFO MySQL|contact_db|snapshot A previous offset indicating a completed snapshot has been found. Neither schema nor data will be snapshotted. [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
connect_1 | 2022-04-21 13:24:33,422 INFO MySQL|contact_db|snapshot Snapshot ended with SnapshotResult [status=SKIPPED, offset=MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000004, currentBinlogPosition=9138, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000004, restartBinlogPosition=9138, restartRowsToSkip=1, restartEventsToSkip=2, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:24:33,427 INFO MySQL|contact_db|streaming Requested thread factory for connector MySqlConnector, id = contact_db named = binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:24:33,429 INFO MySQL|contact_db|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
mysql_1 | mbind: Operation not permitted
connect_1 | 2022-04-21 13:24:33,440 INFO MySQL|contact_db|streaming Skip 2 events on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:24:33,441 INFO MySQL|contact_db|streaming Skip 1 rows on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:24:33,441 INFO MySQL|contact_db|streaming Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:24:33,445 INFO MySQL|contact_db|streaming Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
mysql_1 | mbind: Operation not permitted
connect_1 | Apr 21, 2022 1:24:33 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
connect_1 | INFO: Connected to 172.19.0.3:3306 at mysql-bin.000004/9138 (sid:438567, cid:10)
connect_1 | 2022-04-21 13:24:33,461 INFO MySQL|contact_db|binlog Connected to MySQL binlog at 172.19.0.3:3306, starting at MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000004, currentBinlogPosition=9138, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000004, restartBinlogPosition=9138, restartRowsToSkip=1, restartEventsToSkip=2, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]] [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:24:33,461 INFO MySQL|contact_db|streaming Waiting for keepalive thread to start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:24:33,462 INFO MySQL|contact_db|binlog Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:24:33,562 INFO MySQL|contact_db|streaming Keepalive thread is running [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
kafka_1 | 2022-04-21 13:24:49,377 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group console-consumer-83427 in state PreparingRebalance with old generation 1 (__consumer_offsets-45) (reason: Removing member console-consumer-1a523512-c5fa-4c5d-b8f3-e39ea5a248b8 on LeaveGroup)
kafka_1 | 2022-04-21 13:24:49,377 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Group console-consumer-83427 with generation 2 is now empty (__consumer_offsets-45)
kafka_1 | 2022-04-21 13:24:49,379 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Member MemberMetadata(memberId=console-consumer-1a523512-c5fa-4c5d-b8f3-e39ea5a248b8, groupInstanceId=None, clientId=console-consumer, clientHost=/172.19.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) has left group console-consumer-83427 through explicit `LeaveGroup` request
kafka_1 | 2022-04-21 13:25:01,589 - INFO [data-plane-kafka-request-handler-7:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group console-consumer-20391 in Empty state. Created a new member id console-consumer-2be32988-3279-4264-9bdf-3db186b86bb5 and request the member to rejoin with this id.
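(The "Get all known binlogs from MySQL" step logged above is the connector asking the server which binlog files it still retains, so it can verify that its recorded restart file, here `mysql-bin.000004`, has not been purged. A sketch of the equivalent check, run with any MySQL client against the `mysql` container; the statement is standard MySQL, everything else about your session is up to you:)

```sql
-- Lists the binlog files the server currently retains; the connector's
-- restart file (mysql-bin.000004 in the log above) must appear in this list,
-- otherwise Debezium cannot resume and would need a new snapshot.
SHOW BINARY LOGS;
```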
kafka_1 | 2022-04-21 13:25:01,593 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group console-consumer-20391 in state PreparingRebalance with old generation 0 (__consumer_offsets-46) (reason: Adding new member console-consumer-2be32988-3279-4264-9bdf-3db186b86bb5 with group instance id None)
kafka_1 | 2022-04-21 13:25:01,594 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group console-consumer-20391 generation 1 (__consumer_offsets-46) with 1 members
kafka_1 | 2022-04-21 13:25:01,605 - INFO [data-plane-kafka-request-handler-2:Logging@66] - [GroupCoordinator 1]: Assignment received from leader console-consumer-2be32988-3279-4264-9bdf-3db186b86bb5 for group console-consumer-20391 for generation 1. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 13:25:32,152 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:26:32,154 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:26:35,173 INFO || Successfully tested connection for jdbc:mysql://172.19.0.3:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'root' [io.debezium.connector.mysql.MySqlConnector]
connect_1 | 2022-04-21 13:26:35,176 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
connect_1 | 2022-04-21 13:26:35,179 INFO || AbstractConfig values:
connect_1 |  [org.apache.kafka.common.config.AbstractConfig]
connect_1 | 2022-04-21 13:26:35,198 INFO || [Worker clientId=connect-1, groupId=1] Connector kafka-contact-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,200 INFO || [Worker clientId=connect-1, groupId=1] Handling connector-only config update by restarting connector kafka-contact-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,200 INFO || Stopping connector kafka-contact-connector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:26:35,200 INFO || Scheduled shutdown for WorkerConnector{id=kafka-contact-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
connect_1 | 2022-04-21 13:26:35,201 INFO || Completed shutdown for WorkerConnector{id=kafka-contact-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
connect_1 | 2022-04-21 13:26:35,205 INFO || [Worker clientId=connect-1, groupId=1] Starting connector kafka-contact-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,205 INFO || Creating connector kafka-contact-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:26:35,205 INFO || SourceConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig]
connect_1 | 2022-04-21 13:26:35,206 INFO || EnrichedConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	transforms.filter.condition = value.op == u
connect_1 | 	transforms.filter.language = jsr223.groovy
connect_1 | 	transforms.filter.negate = false
connect_1 | 	transforms.filter.null.handling.mode = keep
connect_1 | 	transforms.filter.predicate =
connect_1 | 	transforms.filter.topic.regex = contact.debezium.changes
connect_1 | 	transforms.filter.type = class io.debezium.transforms.Filter
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 13:26:35,206 INFO || EnrichedSourceConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.default.exclude = []
connect_1 | 	topic.creation.default.include = [.*]
connect_1 | 	topic.creation.default.partitions = 1
connect_1 | 	topic.creation.default.replication.factor = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig]
connect_1 | 2022-04-21 13:26:35,207 INFO || EnrichedConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.default.exclude = []
connect_1 | 	topic.creation.default.include = [.*]
connect_1 | 	topic.creation.default.partitions = 1
connect_1 | 	topic.creation.default.replication.factor = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	transforms.filter.condition = value.op == u
connect_1 | 	transforms.filter.language = jsr223.groovy
connect_1 | 	transforms.filter.negate = false
connect_1 | 	transforms.filter.null.handling.mode = keep
connect_1 | 	transforms.filter.predicate =
connect_1 | 	transforms.filter.topic.regex = contact.debezium.changes
connect_1 | 	transforms.filter.type = class io.debezium.transforms.Filter
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 13:26:35,207 INFO || Instantiated connector kafka-contact-connector with version 1.9.0.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:26:35,207 INFO || Finished creating connector kafka-contact-connector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:26:35,211 INFO || SourceConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig]
connect_1 | 2022-04-21 13:26:35,212 INFO || EnrichedConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	transforms.filter.condition = value.op == u
connect_1 | 	transforms.filter.language = jsr223.groovy
connect_1 | 	transforms.filter.negate = false
connect_1 | 	transforms.filter.null.handling.mode = keep
connect_1 | 	transforms.filter.predicate =
connect_1 | 	transforms.filter.topic.regex = contact.debezium.changes
connect_1 | 	transforms.filter.type = class io.debezium.transforms.Filter
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 13:26:35,212 INFO || EnrichedSourceConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.default.exclude = []
connect_1 | 	topic.creation.default.include = [.*]
connect_1 | 	topic.creation.default.partitions = 1
connect_1 | 	topic.creation.default.replication.factor = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig]
connect_1 | 2022-04-21 13:26:35,212 INFO || EnrichedConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	topic.creation.default.exclude = []
connect_1 | 	topic.creation.default.include = [.*]
connect_1 | 	topic.creation.default.partitions = 1
connect_1 | 	topic.creation.default.replication.factor = 1
connect_1 | 	topic.creation.groups = []
connect_1 | 	transforms = [filter]
connect_1 | 	transforms.filter.condition = value.op == u
connect_1 | 	transforms.filter.language = jsr223.groovy
connect_1 | 	transforms.filter.negate = false
connect_1 | 	transforms.filter.null.handling.mode = keep
connect_1 | 	transforms.filter.predicate =
connect_1 | 	transforms.filter.topic.regex = contact.debezium.changes
connect_1 | 	transforms.filter.type = class io.debezium.transforms.Filter
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 13:26:35,238 INFO || [Worker clientId=connect-1, groupId=1] Tasks [kafka-contact-connector-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,239 INFO || [Worker clientId=connect-1, groupId=1] Handling task config update by restarting tasks [kafka-contact-connector-0] [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,239 INFO || Stopping task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:26:35,651 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:26:35,652 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:26:35,732 INFO MySQL|contact_db|streaming Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:26:35,733 INFO MySQL|contact_db|binlog Stopped reading binlog after 0 events, last recorded offset: {transaction_id=null, file=mysql-bin.000005, pos=157, server_id=223344, event=1} [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:26:35,735 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
connect_1 | 2022-04-21 13:26:35,736 INFO || [Producer clientId=contact_db-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
connect_1 | 2022-04-21 13:26:35,738 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,738 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,738 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,738 INFO || App info kafka.producer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:35,739 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
connect_1 | 2022-04-21 13:26:35,741 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,741 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,741 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,741 INFO || App info kafka.producer for connector-producer-kafka-contact-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:35,742 INFO || App info kafka.admin.client for connector-adminclient-kafka-contact-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:35,743 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,743 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,743 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:35,747 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
connect_1 | 2022-04-21 13:26:35,747 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
kafka_1 | 2022-04-21 13:26:35,749 - INFO [data-plane-kafka-request-handler-6:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group 1 in state PreparingRebalance with old generation 9 (__consumer_offsets-49) (reason: Leader connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 re-joining group during Stable)
kafka_1 | 2022-04-21 13:26:35,750 - INFO [data-plane-kafka-request-handler-6:Logging@66] - [GroupCoordinator 1]: Stabilized group 1 generation 10 (__consumer_offsets-49) with 1 members
connect_1 | 2022-04-21 13:26:35,752 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=10, memberId='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
kafka_1 | 2022-04-21 13:26:35,754 - INFO [data-plane-kafka-request-handler-2:Logging@66] - [GroupCoordinator 1]: Assignment received from leader connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 for group 1 for generation 10. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 13:26:35,756 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=10, memberId='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
connect_1 | 2022-04-21 13:26:35,756 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 10 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', leaderUrl='http://172.19.0.5:8083/', offset=10, connectorIds=[kafka-contact-connector], taskIds=[kafka-contact-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,757 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 10 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,757 INFO || [Worker clientId=connect-1, groupId=1] Starting task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 13:26:35,757 INFO || Creating task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 13:26:35,758 INFO || ConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	transforms = [filter]
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig]
connect_1 | 2022-04-21 13:26:35,759 INFO || EnrichedConnectorConfig values:
connect_1 | 	config.action.reload = restart
connect_1 | 	connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 | 	errors.log.enable = false
connect_1 | 	errors.log.include.messages = false
connect_1 | 	errors.retry.delay.max.ms = 60000
connect_1 | 	errors.retry.timeout = 0
connect_1 | 	errors.tolerance = none
connect_1 | 	header.converter = null
connect_1 | 	key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 | 	name = kafka-contact-connector
connect_1 | 	predicates = []
connect_1 | 	tasks.max = 1
connect_1 | 	transforms = [filter]
connect_1 | 	transforms.filter.condition = value.op == u
connect_1 | 	transforms.filter.language = jsr223.groovy
connect_1 | 	transforms.filter.negate = false
connect_1 | 	transforms.filter.null.handling.mode = keep
connect_1 | 	transforms.filter.predicate =
connect_1 | 	transforms.filter.topic.regex = contact.debezium.changes
connect_1 | 	transforms.filter.type = class io.debezium.transforms.Filter
connect_1 | 	value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 13:26:35,759 INFO || TaskConfig values:
connect_1 | 	task.class = class io.debezium.connector.mysql.MySqlConnectorTask
connect_1 | [org.apache.kafka.connect.runtime.TaskConfig] connect_1 | 2022-04-21 13:26:35,759 INFO || Instantiated task kafka-contact-connector-0 with version 1.9.0.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 13:26:35,759 INFO || JsonConverterConfig values: connect_1 | converter.type = key connect_1 | decimal.format = BASE64 connect_1 | schemas.cache.size = 1000 connect_1 | schemas.enable = false connect_1 | [org.apache.kafka.connect.json.JsonConverterConfig] connect_1 | 2022-04-21 13:26:35,759 INFO || JsonConverterConfig values: connect_1 | converter.type = value connect_1 | decimal.format = BASE64 connect_1 | schemas.cache.size = 1000 connect_1 | schemas.enable = false connect_1 | [org.apache.kafka.connect.json.JsonConverterConfig] connect_1 | 2022-04-21 13:26:35,759 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task kafka-contact-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 13:26:35,759 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task kafka-contact-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 13:26:35,759 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task kafka-contact-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 13:26:35,761 INFO || SourceConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class 
org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [filter] connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig] connect_1 | 2022-04-21 13:26:35,761 INFO || EnrichedConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [filter] connect_1 | transforms.filter.condition = value.op == u connect_1 | transforms.filter.language = jsr223.groovy connect_1 | transforms.filter.negate = false connect_1 | transforms.filter.null.handling.mode = keep connect_1 | transforms.filter.predicate = connect_1 | transforms.filter.topic.regex = contact.debezium.changes connect_1 | transforms.filter.type = class io.debezium.transforms.Filter connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] connect_1 | 2022-04-21 13:26:35,762 INFO || EnrichedSourceConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | 
errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.default.exclude = [] connect_1 | topic.creation.default.include = [.*] connect_1 | topic.creation.default.partitions = 1 connect_1 | topic.creation.default.replication.factor = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [filter] connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig] connect_1 | 2022-04-21 13:26:35,763 INFO || EnrichedConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.default.exclude = [] connect_1 | topic.creation.default.include = [.*] connect_1 | topic.creation.default.partitions = 1 connect_1 | topic.creation.default.replication.factor = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [filter] connect_1 | transforms.filter.condition = value.op == u connect_1 | transforms.filter.language = jsr223.groovy connect_1 | transforms.filter.negate = false connect_1 | transforms.filter.null.handling.mode = keep connect_1 | transforms.filter.predicate = connect_1 | transforms.filter.topic.regex = contact.debezium.changes connect_1 | transforms.filter.type = class io.debezium.transforms.Filter connect_1 | 
value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] connect_1 | 2022-04-21 13:26:35,764 INFO || Using language 'jsr223.groovy' to evaluate expression 'value.op == u' [io.debezium.transforms.Filter] connect_1 | 2022-04-21 13:26:36,341 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.Filter} [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 13:26:36,342 INFO || ProducerConfig values: connect_1 | acks = -1 connect_1 | batch.size = 16384 connect_1 | bootstrap.servers = [kafka:9092] connect_1 | buffer.memory = 33554432 connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connector-producer-kafka-contact-connector-0 connect_1 | compression.type = none connect_1 | connections.max.idle.ms = 540000 connect_1 | delivery.timeout.ms = 2147483647 connect_1 | enable.idempotence = true connect_1 | interceptor.classes = [] connect_1 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer connect_1 | linger.ms = 0 connect_1 | max.block.ms = 9223372036854775807 connect_1 | max.in.flight.requests.per.connection = 1 connect_1 | max.request.size = 1048576 connect_1 | metadata.max.age.ms = 300000 connect_1 | metadata.max.idle.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner connect_1 | receive.buffer.bytes = 32768 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | 
sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | 
ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | transaction.timeout.ms = 60000 connect_1 | transactional.id = null connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer connect_1 | [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:26:36,345 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:26:36,345 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 13:26:36,345 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:26:36,346 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:26:36,346 INFO || Kafka startTimeMs: 1650547596345 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:26:36,348 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connector-adminclient-kafka-contact-connector-0 connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | 
request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | 
ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,351 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 13:26:36,352 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:26:36,352 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:26:36,352 INFO || Kafka startTimeMs: 1650547596352 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 13:26:36,354 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 13:26:36,354 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 13:26:36,356 INFO || Starting MySqlConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,356 INFO || connector.class = io.debezium.connector.mysql.MySqlConnector [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,356 INFO || snapshot.locking.mode = none [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,356 INFO || topic.creation.default.partitions = 1 
[io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,356 INFO || tasks.max = 1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,356 INFO || database.history.kafka.topic = contact_db.schema-changes [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,356 INFO || transforms.Reroute.key.field.name = universe [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || transforms = filter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || include.schema.changes = true [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || transforms.Reroute.topic.replacement = contact.debezium.changes [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || topic.creation.default.replication.factor = 1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || database.history.store.only.captured.tables.ddl = true [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || database.user = root [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || transforms.Reroute.type = io.debezium.transforms.ByLogicalTableRouter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || transforms.filter.topic.regex = contact.debezium.changes [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || database.server.id = 438567 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || 
topic.creation.default.cleanup.policy = compact [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,357 INFO || database.history.kafka.bootstrap.servers = 172.19.0.4:9092 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || database.server.name = contact_db [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || transforms.filter.type = io.debezium.transforms.Filter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || transforms.filter.language = jsr223.groovy [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || database.port = 3306 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || transforms.Reroute.key.field.replacement = $1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || key.converter.schemas.enable = false [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || column.exclude.list = .*\.last_.*,.*\.nouverts.*,.*\.nclicks.*,.*\.nenvois,.*\.nbounces,.*\.nbounces_sms,.*\.nclickssms,.*\.nx,.*\.nsms,.*\.nclicksms,.*\.ntransfo,.*\.npurchases,.*\.sommeca [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || transforms.filter.condition = value.op == u [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || database.serverTimezone = Europe/Paris [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || task.class = io.debezium.connector.mysql.MySqlConnectorTask [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || database.hostname = 172.19.0.3 [io.debezium.connector.common.BaseSourceTask] connect_1 | 
2022-04-21 13:26:36,359 INFO || database.connectionTimeZone = Europe/Paris [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || value.converter.schemas.enable = false [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || name = kafka-contact-connector [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || table.include.list = splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || database.include.list = splio3_data [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,359 INFO || snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask] mysql_1 | mbind: Operation not permitted connect_1 | 2022-04-21 13:26:36,376 INFO || Found previous partition offset MySqlPartition [sourcePartition={server=contact_db}]: {transaction_id=null, file=mysql-bin.000004, pos=9138, row=1, event=2} [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 13:26:36,388 INFO || KafkaDatabaseHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=contact_db-dbhistory, bootstrap.servers=172.19.0.4:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=contact_db-dbhistory} [io.debezium.relational.history.KafkaDatabaseHistory] connect_1 | 2022-04-21 13:26:36,388 INFO || KafkaDatabaseHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, 
max.block.ms=10000, bootstrap.servers=172.19.0.4:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=contact_db-dbhistory, linger.ms=0} [io.debezium.relational.history.KafkaDatabaseHistory]
connect_1 | 2022-04-21 13:26:36,388 INFO || Requested thread factory for connector MySqlConnector, id = contact_db named = db-history-config-check [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,389 INFO || ProducerConfig values:
connect_1 |     acks = 1
connect_1 |     batch.size = 32768
connect_1 |     bootstrap.servers = [172.19.0.4:9092]
connect_1 |     buffer.memory = 1048576
connect_1 |     client.dns.lookup = use_all_dns_ips
connect_1 |     client.id = contact_db-dbhistory
connect_1 |     compression.type = none
connect_1 |     connections.max.idle.ms = 540000
connect_1 |     delivery.timeout.ms = 120000
connect_1 |     enable.idempotence = true
connect_1 |     interceptor.classes = []
connect_1 |     key.serializer = class org.apache.kafka.common.serialization.StringSerializer
connect_1 |     linger.ms = 0
connect_1 |     max.block.ms = 10000
connect_1 |     max.in.flight.requests.per.connection = 5
connect_1 |     max.request.size = 1048576
connect_1 |     metadata.max.age.ms = 300000
connect_1 |     metadata.max.idle.ms = 300000
connect_1 |     metric.reporters = []
connect_1 |     metrics.num.samples = 2
connect_1 |     metrics.recording.level = INFO
connect_1 |     metrics.sample.window.ms = 30000
connect_1 |     partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
connect_1 |     receive.buffer.bytes = 32768
connect_1 |     reconnect.backoff.max.ms = 1000
connect_1 |     reconnect.backoff.ms = 50
connect_1 |     request.timeout.ms = 30000
connect_1 |     retries = 1
connect_1 |     retry.backoff.ms = 100
connect_1 |     sasl.client.callback.handler.class = null
connect_1 |     sasl.jaas.config = null
connect_1 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |     sasl.kerberos.min.time.before.relogin = 60000
connect_1 |     sasl.kerberos.service.name = null
connect_1 |     sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |     sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |     sasl.login.callback.handler.class = null
connect_1 |     sasl.login.class = null
connect_1 |     sasl.login.connect.timeout.ms = null
connect_1 |     sasl.login.read.timeout.ms = null
connect_1 |     sasl.login.refresh.buffer.seconds = 300
connect_1 |     sasl.login.refresh.min.period.seconds = 60
connect_1 |     sasl.login.refresh.window.factor = 0.8
connect_1 |     sasl.login.refresh.window.jitter = 0.05
connect_1 |     sasl.login.retry.backoff.max.ms = 10000
connect_1 |     sasl.login.retry.backoff.ms = 100
connect_1 |     sasl.mechanism = GSSAPI
connect_1 |     sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |     sasl.oauthbearer.expected.audience = null
connect_1 |     sasl.oauthbearer.expected.issuer = null
connect_1 |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |     sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |     sasl.oauthbearer.scope.claim.name = scope
connect_1 |     sasl.oauthbearer.sub.claim.name = sub
connect_1 |     sasl.oauthbearer.token.endpoint.url = null
connect_1 |     security.protocol = PLAINTEXT
connect_1 |     security.providers = null
connect_1 |     send.buffer.bytes = 131072
connect_1 |     socket.connection.setup.timeout.max.ms = 30000
connect_1 |     socket.connection.setup.timeout.ms = 10000
connect_1 |     ssl.cipher.suites = null
connect_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |     ssl.endpoint.identification.algorithm = https
connect_1 |     ssl.engine.factory.class = null
connect_1 |     ssl.key.password = null
connect_1 |     ssl.keymanager.algorithm = SunX509
connect_1 |     ssl.keystore.certificate.chain = null
connect_1 |     ssl.keystore.key = null
connect_1 |     ssl.keystore.location = null
connect_1 |     ssl.keystore.password = null
connect_1 |     ssl.keystore.type = JKS
connect_1 |     ssl.protocol = TLSv1.3
connect_1 |     ssl.provider = null
connect_1 |     ssl.secure.random.implementation = null
connect_1 |     ssl.trustmanager.algorithm = PKIX
connect_1 |     ssl.truststore.certificates = null
connect_1 |     ssl.truststore.location = null
connect_1 |     ssl.truststore.password = null
connect_1 |     ssl.truststore.type = JKS
connect_1 |     transaction.timeout.ms = 60000
connect_1 |     transactional.id = null
connect_1 |     value.serializer = class org.apache.kafka.common.serialization.StringSerializer
connect_1 |  [org.apache.kafka.clients.producer.ProducerConfig]
connect_1 | 2022-04-21 13:26:36,392 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,392 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,392 INFO || Kafka startTimeMs: 1650547596392 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,393 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 13:26:36,395 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
connect_1 | 2022-04-21 13:26:36,395 INFO || ConsumerConfig values:
connect_1 |     allow.auto.create.topics = true
connect_1 |     auto.commit.interval.ms = 5000
connect_1 |     auto.offset.reset = earliest
connect_1 |     bootstrap.servers = [172.19.0.4:9092]
connect_1 |     check.crcs = true
connect_1 |     client.dns.lookup = use_all_dns_ips
connect_1 |     client.id = contact_db-dbhistory
connect_1 |     client.rack =
connect_1 |     connections.max.idle.ms = 540000
connect_1 |     default.api.timeout.ms = 60000
connect_1 |     enable.auto.commit = false
connect_1 |     exclude.internal.topics = true
connect_1 |     fetch.max.bytes = 52428800
connect_1 |     fetch.max.wait.ms = 500
connect_1 |     fetch.min.bytes = 1
connect_1 |     group.id = contact_db-dbhistory
connect_1 |     group.instance.id = null
connect_1 |     heartbeat.interval.ms = 3000
connect_1 |     interceptor.classes = []
connect_1 |     internal.leave.group.on.close = true
connect_1 |     internal.throw.on.fetch.stable.offset.unsupported = false
connect_1 |     isolation.level = read_uncommitted
connect_1 |     key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 |     max.partition.fetch.bytes = 1048576
connect_1 |     max.poll.interval.ms = 300000
connect_1 |     max.poll.records = 500
connect_1 |     metadata.max.age.ms = 300000
connect_1 |     metric.reporters = []
connect_1 |     metrics.num.samples = 2
connect_1 |     metrics.recording.level = INFO
connect_1 |     metrics.sample.window.ms = 30000
connect_1 |     partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
connect_1 |     receive.buffer.bytes = 65536
connect_1 |     reconnect.backoff.max.ms = 1000
connect_1 |     reconnect.backoff.ms = 50
connect_1 |     request.timeout.ms = 30000
connect_1 |     retry.backoff.ms = 100
connect_1 |     sasl.client.callback.handler.class = null
connect_1 |     sasl.jaas.config = null
connect_1 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |     sasl.kerberos.min.time.before.relogin = 60000
connect_1 |     sasl.kerberos.service.name = null
connect_1 |     sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |     sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |     sasl.login.callback.handler.class = null
connect_1 |     sasl.login.class = null
connect_1 |     sasl.login.connect.timeout.ms = null
connect_1 |     sasl.login.read.timeout.ms = null
connect_1 |     sasl.login.refresh.buffer.seconds = 300
connect_1 |     sasl.login.refresh.min.period.seconds = 60
connect_1 |     sasl.login.refresh.window.factor = 0.8
connect_1 |     sasl.login.refresh.window.jitter = 0.05
connect_1 |     sasl.login.retry.backoff.max.ms = 10000
connect_1 |     sasl.login.retry.backoff.ms = 100
connect_1 |     sasl.mechanism = GSSAPI
connect_1 |     sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |     sasl.oauthbearer.expected.audience = null
connect_1 |     sasl.oauthbearer.expected.issuer = null
connect_1 |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |     sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |     sasl.oauthbearer.scope.claim.name = scope
connect_1 |     sasl.oauthbearer.sub.claim.name = sub
connect_1 |     sasl.oauthbearer.token.endpoint.url = null
connect_1 |     security.protocol = PLAINTEXT
connect_1 |     security.providers = null
connect_1 |     send.buffer.bytes = 131072
connect_1 |     session.timeout.ms = 10000
connect_1 |     socket.connection.setup.timeout.max.ms = 30000
connect_1 |     socket.connection.setup.timeout.ms = 10000
connect_1 |     ssl.cipher.suites = null
connect_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |     ssl.endpoint.identification.algorithm = https
connect_1 |     ssl.engine.factory.class = null
connect_1 |     ssl.key.password = null
connect_1 |     ssl.keymanager.algorithm = SunX509
connect_1 |     ssl.keystore.certificate.chain = null
connect_1 |     ssl.keystore.key = null
connect_1 |     ssl.keystore.location = null
connect_1 |     ssl.keystore.password = null
connect_1 |     ssl.keystore.type = JKS
connect_1 |     ssl.protocol = TLSv1.3
connect_1 |     ssl.provider = null
connect_1 |     ssl.secure.random.implementation = null
connect_1 |     ssl.trustmanager.algorithm = PKIX
connect_1 |     ssl.truststore.certificates = null
connect_1 |     ssl.truststore.location = null
connect_1 |     ssl.truststore.password = null
connect_1 |     ssl.truststore.type = JKS
connect_1 |     value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:26:36,398 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,398 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,398 INFO || Kafka startTimeMs: 1650547596398 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,399 INFO ||
[Producer clientId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,403 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,405 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,405 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,406 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,406 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,406 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,407 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,408 INFO || ConsumerConfig values:
connect_1 |     [... identical to the ConsumerConfig values logged above; omitted ...]
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:26:36,411 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,411 INFO || Kafka commitId:
37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,411 INFO || Kafka startTimeMs: 1650547596411 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,411 INFO || Creating thread debezium-mysqlconnector-contact_db-db-history-config-check [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,412 INFO || AdminClientConfig values:
connect_1 |     bootstrap.servers = [172.19.0.4:9092]
connect_1 |     client.dns.lookup = use_all_dns_ips
connect_1 |     client.id = contact_db-dbhistory-topic-check
connect_1 |     connections.max.idle.ms = 300000
connect_1 |     default.api.timeout.ms = 60000
connect_1 |     metadata.max.age.ms = 300000
connect_1 |     metric.reporters = []
connect_1 |     metrics.num.samples = 2
connect_1 |     metrics.recording.level = INFO
connect_1 |     metrics.sample.window.ms = 30000
connect_1 |     receive.buffer.bytes = 65536
connect_1 |     reconnect.backoff.max.ms = 1000
connect_1 |     reconnect.backoff.ms = 50
connect_1 |     request.timeout.ms = 30000
connect_1 |     retries = 1
connect_1 |     retry.backoff.ms = 100
connect_1 |     sasl.client.callback.handler.class = null
connect_1 |     sasl.jaas.config = null
connect_1 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |     sasl.kerberos.min.time.before.relogin = 60000
connect_1 |     sasl.kerberos.service.name = null
connect_1 |     sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |     sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |     sasl.login.callback.handler.class = null
connect_1 |     sasl.login.class = null
connect_1 |     sasl.login.connect.timeout.ms = null
connect_1 |     sasl.login.read.timeout.ms = null
connect_1 |     sasl.login.refresh.buffer.seconds = 300
connect_1 |     sasl.login.refresh.min.period.seconds = 60
connect_1 |     sasl.login.refresh.window.factor = 0.8
connect_1 |     sasl.login.refresh.window.jitter = 0.05
connect_1 |     sasl.login.retry.backoff.max.ms = 10000
connect_1 |     sasl.login.retry.backoff.ms = 100
connect_1 |     sasl.mechanism = GSSAPI
connect_1 |     sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |     sasl.oauthbearer.expected.audience = null
connect_1 |     sasl.oauthbearer.expected.issuer = null
connect_1 |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |     sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |     sasl.oauthbearer.scope.claim.name = scope
connect_1 |     sasl.oauthbearer.sub.claim.name = sub
connect_1 |     sasl.oauthbearer.token.endpoint.url = null
connect_1 |     security.protocol = PLAINTEXT
connect_1 |     security.providers = null
connect_1 |     send.buffer.bytes = 131072
connect_1 |     socket.connection.setup.timeout.max.ms = 30000
connect_1 |     socket.connection.setup.timeout.ms = 10000
connect_1 |     ssl.cipher.suites = null
connect_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |     ssl.endpoint.identification.algorithm = https
connect_1 |     ssl.engine.factory.class = null
connect_1 |     ssl.key.password = null
connect_1 |     ssl.keymanager.algorithm = SunX509
connect_1 |     ssl.keystore.certificate.chain = null
connect_1 |     ssl.keystore.key = null
connect_1 |     ssl.keystore.location = null
connect_1 |     ssl.keystore.password = null
connect_1 |     ssl.keystore.type = JKS
connect_1 |     ssl.protocol = TLSv1.3
connect_1 |     ssl.provider = null
connect_1 |     ssl.secure.random.implementation = null
connect_1 |     ssl.trustmanager.algorithm = PKIX
connect_1 |     ssl.truststore.certificates = null
connect_1 |     ssl.truststore.location = null
connect_1 |     ssl.truststore.password = null
connect_1 |     ssl.truststore.type = JKS
connect_1 |  [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'value.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'acks' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'batch.size' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'max.block.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'buffer.memory' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'key.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 WARN || The configuration 'linger.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
connect_1 | 2022-04-21 13:26:36,416 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,416 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,416 INFO || Kafka startTimeMs: 1650547596416 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,417 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,418 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,424 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21
13:26:36,424 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,425 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,425 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,425 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,427 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,428 INFO || ConsumerConfig values:
connect_1 |     [... identical to the ConsumerConfig values logged above; omitted ...]
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:26:36,434 INFO || Database history topic 'contact_db.schema-changes' has correct settings [io.debezium.relational.history.KafkaDatabaseHistory]
connect_1 | 2022-04-21 13:26:36,434 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,434 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,434 INFO || Kafka startTimeMs: 1650547596434 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,435 INFO || App info kafka.admin.client for contact_db-dbhistory-topic-check unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,436 INFO || Metrics scheduler
closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,437 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,437 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,439 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,441 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,441 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,441 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,441 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,441 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,442 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,442 INFO || ConsumerConfig values:
connect_1 |     [... identical to the ConsumerConfig values logged above; omitted ...]
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:26:36,445 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,445 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,445 INFO || Kafka startTimeMs: 1650547596445 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,449 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,449 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,455 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,455 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,456 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,456 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,456 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,457 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,457 INFO || Started database history recovery
[io.debezium.relational.history.DatabaseHistoryMetrics]
connect_1 | 2022-04-21 13:26:36,458 INFO || ConsumerConfig values:
connect_1 |     [... identical to the ConsumerConfig values logged above; omitted ...]
connect_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 13:26:36,461 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,461 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,461 INFO || Kafka startTimeMs: 1650547596461 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,462 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Subscribed to topic(s): contact_db.schema-changes [org.apache.kafka.clients.consumer.KafkaConsumer]
connect_1 | 2022-04-21 13:26:36,467 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,467 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:26:36,473 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Discovered group coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,473 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] (Re-)joining
group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:26:36,476 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group contact_db-dbhistory in Empty state. Created a new member id contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba and request the member to rejoin with this id.
connect_1 | 2022-04-21 13:26:36,477 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: need to re-join with the given member-id [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,477 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:26:36,479 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group contact_db-dbhistory in state PreparingRebalance with old generation 2 (__consumer_offsets-26) (reason: Adding new member contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba with group instance id None)
kafka_1 | 2022-04-21 13:26:36,480 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group contact_db-dbhistory generation 3 (__consumer_offsets-26) with 1 members
connect_1 | 2022-04-21 13:26:36,482 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Successfully joined group with generation Generation{generationId=3, memberId='contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,482 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Finished assignment for group at generation 3: {contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba=Assignment(partitions=[contact_db.schema-changes-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:26:36,484 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [GroupCoordinator 1]: Assignment received from leader contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba for group contact_db-dbhistory for generation 3. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 13:26:36,487 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Successfully synced group in generation Generation{generationId=3, memberId='contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,488 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Notifying assignor about the new Assignment(partitions=[contact_db.schema-changes-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,488 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Adding newly assigned partitions: contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,490 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Found no committed offset for partition contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,492 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting offset for partition contact_db.schema-changes-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 13:26:36,540 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Revoke previously assigned partitions contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,540 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Member contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba sending LeaveGroup request to coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:26:36,541 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group contact_db-dbhistory in state PreparingRebalance with old generation 3 (__consumer_offsets-26) (reason: Removing member contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba on LeaveGroup)
kafka_1 | 2022-04-21 13:26:36,541 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Group contact_db-dbhistory with generation 4 is now empty (__consumer_offsets-26)
connect_1 | 2022-04-21 13:26:36,541 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 13:26:36,541 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 13:26:36,543 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Member MemberMetadata(memberId=contact_db-dbhistory-efefe8ce-e3fe-4aa3-829a-5892074aaaba, groupInstanceId=None, clientId=contact_db-dbhistory, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group contact_db-dbhistory through explicit `LeaveGroup` request
connect_1 | 2022-04-21 13:26:36,544 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,544 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,544 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 13:26:36,545 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 13:26:36,546 INFO || Finished database history recovery of 2 change(s) in 88 ms [io.debezium.relational.history.DatabaseHistoryMetrics]
connect_1 | 2022-04-21 13:26:36,557 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 13:26:36,562 INFO || Get all known binlogs from MySQL [io.debezium.connector.mysql.MySqlConnection]
connect_1 | 2022-04-21 13:26:36,563 INFO || MySQL has the binlog file 'mysql-bin.000004' required by the connector [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 13:26:36,564 INFO || Requested thread factory for connector MySqlConnector, id = contact_db named = change-event-source-coordinator [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,564 INFO || Creating thread debezium-mysqlconnector-contact_db-change-event-source-coordinator [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,564 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:26:36,568 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Executing source task [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21
13:26:36,568 INFO MySQL|contact_db|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:26:36,568 INFO MySQL|contact_db|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:26:36,568 INFO MySQL|contact_db|snapshot A previous offset indicating a completed snapshot has been found. Neither schema nor data will be snapshotted. [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
connect_1 | 2022-04-21 13:26:36,569 INFO MySQL|contact_db|snapshot Snapshot ended with SnapshotResult [status=SKIPPED, offset=MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000004, currentBinlogPosition=9138, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000004, restartBinlogPosition=9138, restartRowsToSkip=1, restartEventsToSkip=2, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 13:26:36,570 INFO MySQL|contact_db|streaming Requested thread factory for connector MySqlConnector, id = contact_db named = binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,570 INFO MySQL|contact_db|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
mysql_1 | mbind: Operation not permitted
connect_1 | 2022-04-21 13:26:36,575 INFO MySQL|contact_db|streaming Skip 2 events on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:26:36,576 INFO MySQL|contact_db|streaming Skip 1 rows on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:26:36,576 INFO MySQL|contact_db|streaming Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,577 INFO MySQL|contact_db|streaming Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
mysql_1 | mbind: Operation not permitted
connect_1 | Apr 21, 2022 1:26:36 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
connect_1 | INFO: Connected to 172.19.0.3:3306 at mysql-bin.000004/9138 (sid:438567, cid:15)
connect_1 | 2022-04-21 13:26:36,581 INFO MySQL|contact_db|binlog Connected to MySQL binlog at 172.19.0.3:3306, starting at MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000004, currentBinlogPosition=9138, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000004, restartBinlogPosition=9138, restartRowsToSkip=1, restartEventsToSkip=2, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]] [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:26:36,581 INFO MySQL|contact_db|streaming Waiting for keepalive thread to start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:26:36,581 INFO MySQL|contact_db|binlog Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 13:26:36,681 INFO MySQL|contact_db|streaming Keepalive thread is running [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 13:27:36,354 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:28:17,770 INFO || 1 records sent during previous 00:01:42.01, last recorded offset: {transaction_id=null, ts_sec=1650547697, file=mysql-bin.000005, pos=236, row=1, server_id=223344, event=2} [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 13:28:17,778 INFO || The task will send records to topic 'contact_db.splio3_data.qa_journey' for the first time.
Checking whether topic exists [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:28:17,786 INFO || Creating topic 'contact_db.splio3_data.qa_journey' [org.apache.kafka.connect.runtime.WorkerSourceTask]
kafka_1 | 2022-04-21 13:28:17,806 - INFO [data-plane-kafka-request-handler-0:Logging@66] - Creating topic contact_db.splio3_data.qa_journey with configuration {cleanup.policy=compact} and initial partition assignment HashMap(0 -> ArrayBuffer(1))
kafka_1 | 2022-04-21 13:28:17,859 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(contact_db.splio3_data.qa_journey-0)
kafka_1 | 2022-04-21 13:28:17,869 - INFO [data-plane-kafka-request-handler-3:UnifiedLog$@1722] - [LogLoader partition=contact_db.splio3_data.qa_journey-0, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
kafka_1 | 2022-04-21 13:28:17,873 - INFO [data-plane-kafka-request-handler-3:Logging@66] - Created log for partition contact_db.splio3_data.qa_journey-0 in /kafka/data/1/contact_db.splio3_data.qa_journey-0 with properties {cleanup.policy=compact}
kafka_1 | 2022-04-21 13:28:17,874 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition contact_db.splio3_data.qa_journey-0 broker=1] No checkpointed highwatermark is found for partition contact_db.splio3_data.qa_journey-0
kafka_1 | 2022-04-21 13:28:17,875 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [Partition contact_db.splio3_data.qa_journey-0 broker=1] Log loaded for partition contact_db.splio3_data.qa_journey-0 with initial high watermark 0
connect_1 | 2022-04-21 13:28:17,887 INFO || Created topic (name=contact_db.splio3_data.qa_journey, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
connect_1 | 2022-04-21 13:28:17,888 INFO || Created topic '(name=contact_db.splio3_data.qa_journey, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact})' using creation group TopicCreationGroup{name='default', inclusionPattern=.*, exclusionPattern=, numPartitions=1, replicationFactor=1, otherConfigs={cleanup.policy=compact}} [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:28:17,893 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Resetting the last seen epoch of partition contact_db.splio3_data.qa_journey-0 to 0 since the associated topicId changed from null to Q6RwfP6LTO2l1sZg4ECt3A [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,355 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:28:36,364 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-0 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,364 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-5 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-10 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-20 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw
[org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-15 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-9 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-11 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-4 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-16 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-17 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-3 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-24 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-23 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-13 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-18 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-22 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-8 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,365 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-2 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-12 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-19 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-14 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-1 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-6 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-7 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:28:36,366 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-21 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
kafka_1 | 2022-04-21 13:29:06,878 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group console-consumer-20391 in state PreparingRebalance with old generation 1 (__consumer_offsets-46) (reason: Removing member console-consumer-2be32988-3279-4264-9bdf-3db186b86bb5 on LeaveGroup)
kafka_1 | 2022-04-21 13:29:06,879 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Group console-consumer-20391 with generation 2 is now empty (__consumer_offsets-46)
kafka_1 | 2022-04-21 13:29:06,881 - INFO [data-plane-kafka-request-handler-1:Logging@66] -
[GroupCoordinator 1]: Member MemberMetadata(memberId=console-consumer-2be32988-3279-4264-9bdf-3db186b86bb5, groupInstanceId=None, clientId=console-consumer, clientHost=/172.19.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group console-consumer-20391 through explicit `LeaveGroup` request
connect_1 | 2022-04-21 13:29:31,510 INFO || [AdminClient clientId=adminclient-8] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:29:31,903 INFO || [Worker clientId=connect-1, groupId=1] Resetting the last seen epoch of partition contact_db.splio3_data.qa_journey-0 to 0 since the associated topicId changed from null to Q6RwfP6LTO2l1sZg4ECt3A [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 13:29:36,373 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:30:36,373 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:31:36,374 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:31:36,458 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:32:36,374 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:33:31,770 INFO || [Producer clientId=producer-1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:31,906 INFO || [Consumer clientId=consumer-1-2, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:31,931 INFO || [Worker clientId=connect-1, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:31,959 INFO || [Producer clientId=producer-2] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:31,959 INFO || [Consumer clientId=consumer-1-1, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:31,997 INFO || [Consumer clientId=consumer-1-3, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:32,102 INFO || [Producer clientId=producer-3] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:33:36,375 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
kafka_1 | 2022-04-21 13:34:29,238 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Group contact_db-dbhistory transitioned to Dead in generation 4
kafka_1 | 2022-04-21 13:34:29,246 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Group console-consumer-20391 transitioned to Dead in generation 2
kafka_1 | 2022-04-21 13:34:29,247 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Group console-consumer-83427 transitioned to Dead in generation 2
connect_1 | 2022-04-21 13:34:31,539 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:34:36,376 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:35:36,377 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 13:35:36,607 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 13:35:36,663 INFO || [Producer clientId=contact_db-dbhistory] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
kafka_1 | 2022-04-21 13:35:54,084 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group console-consumer-97056 in Empty state.
Created a new member id console-consumer-705c98c5-287e-4c69-ab28-2135b0edd77b and request the member to rejoin with this id. kafka_1 | 2022-04-21 13:35:54,087 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group console-consumer-97056 in state PreparingRebalance with old generation 0 (__consumer_offsets-28) (reason: Adding new member console-consumer-705c98c5-287e-4c69-ab28-2135b0edd77b with group instance id None) kafka_1 | 2022-04-21 13:35:54,088 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group console-consumer-97056 generation 1 (__consumer_offsets-28) with 1 members kafka_1 | 2022-04-21 13:35:54,095 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Assignment received from leader console-consumer-705c98c5-287e-4c69-ab28-2135b0edd77b for group console-consumer-97056 for generation 1. The group has 1 members, 0 of which are static. connect_1 | 2022-04-21 13:36:36,377 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:36:36,558 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:37:36,378 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:38:36,378 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:39:31,744 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:39:36,379 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:40:36,379 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:41:36,380 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:41:36,762 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:42:36,381 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:43:36,381 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:44:31,948 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:44:36,382 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:45:36,382 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:46:36,383 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:46:36,967 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:47:36,383 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:48:36,384 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:49:32,154 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:49:36,384 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:50:36,385 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:51:36,386 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:51:37,171 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:52:36,386 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:53:36,386 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:54:32,281 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:54:36,387 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:55:36,388 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:56:36,388 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:56:37,374 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:57:36,389 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:58:36,389 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 13:59:32,482 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 13:59:36,390 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:00:36,391 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:01:36,394 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:01:37,578 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:02:36,395 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:03:36,395 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:04:32,687 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:04:36,396 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:05:36,396 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:06:36,397 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:06:37,782 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:07:36,397 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:08:36,398 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:09:17,254 INFO || Successfully tested connection for jdbc:mysql://172.19.0.3:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'root' [io.debezium.connector.mysql.MySqlConnector]
connect_1 | 2022-04-21 14:09:17,256 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
connect_1 | 2022-04-21 14:09:17,257 INFO || AbstractConfig values:
connect_1 |  [org.apache.kafka.common.config.AbstractConfig]
connect_1 | 2022-04-21 14:09:17,259 INFO || [Producer clientId=producer-3] Resetting the last seen epoch of partition my_connect_configs-0 to 0 since the associated topicId changed from null to fML0POG8THqTjJyfWIg5aQ [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,265 INFO || [Worker clientId=connect-1, groupId=1] Connector kafka-contact-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 14:09:17,266 INFO || [Worker clientId=connect-1, groupId=1] Handling connector-only config update by restarting connector kafka-contact-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 14:09:17,266 INFO || Stopping connector kafka-contact-connector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 14:09:17,266 INFO || Scheduled shutdown for WorkerConnector{id=kafka-contact-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
connect_1 | 2022-04-21 14:09:17,269 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-0 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,271 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-1 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,271 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-4 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,271 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-2 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,271 INFO || [Producer clientId=producer-2] Resetting the last seen epoch of partition my_connect_statuses-3 to 0 since the associated topicId changed from null to aYxjrvdDTam-gnQaJPAszA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,271 INFO || Completed shutdown for WorkerConnector{id=kafka-contact-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
connect_1 | 2022-04-21 14:09:17,272 INFO || [Worker clientId=connect-1, groupId=1] Starting connector kafka-contact-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 14:09:17,272 INFO || Creating connector kafka-contact-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 14:09:17,273 INFO || SourceConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig]
connect_1 | 2022-04-21 14:09:17,273 INFO || EnrichedConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    transforms.Reroute.key.enforce.uniqueness = true
connect_1 |    transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.key.field.replacement = $1
connect_1 |    transforms.Reroute.negate = false
connect_1 |    transforms.Reroute.predicate = 
connect_1 |    transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.topic.replacement = contact.debezium.changes
connect_1 |    transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 14:09:17,274 INFO || EnrichedSourceConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.default.exclude = []
connect_1 |    topic.creation.default.include = [.*]
connect_1 |    topic.creation.default.partitions = 1
connect_1 |    topic.creation.default.replication.factor = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig]
connect_1 | 2022-04-21 14:09:17,274 INFO || EnrichedConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.default.exclude = []
connect_1 |    topic.creation.default.include = [.*]
connect_1 |    topic.creation.default.partitions = 1
connect_1 |    topic.creation.default.replication.factor = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    transforms.Reroute.key.enforce.uniqueness = true
connect_1 |    transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.key.field.replacement = $1
connect_1 |    transforms.Reroute.negate = false
connect_1 |    transforms.Reroute.predicate = 
connect_1 |    transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.topic.replacement = contact.debezium.changes
connect_1 |    transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 14:09:17,275 INFO || Instantiated connector kafka-contact-connector with version 1.9.0.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 14:09:17,275 INFO || Finished creating connector kafka-contact-connector [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 14:09:17,278 INFO || SourceConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig]
connect_1 | 2022-04-21 14:09:17,279 INFO || EnrichedConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    transforms.Reroute.key.enforce.uniqueness = true
connect_1 |    transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.key.field.replacement = $1
connect_1 |    transforms.Reroute.negate = false
connect_1 |    transforms.Reroute.predicate = 
connect_1 |    transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.topic.replacement = contact.debezium.changes
connect_1 |    transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 14:09:17,280 INFO || EnrichedSourceConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.default.exclude = []
connect_1 |    topic.creation.default.include = [.*]
connect_1 |    topic.creation.default.partitions = 1
connect_1 |    topic.creation.default.replication.factor = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig]
connect_1 | 2022-04-21 14:09:17,281 INFO || EnrichedConnectorConfig values:
connect_1 |    config.action.reload = restart
connect_1 |    connector.class = io.debezium.connector.mysql.MySqlConnector
connect_1 |    errors.log.enable = false
connect_1 |    errors.log.include.messages = false
connect_1 |    errors.retry.delay.max.ms = 60000
connect_1 |    errors.retry.timeout = 0
connect_1 |    errors.tolerance = none
connect_1 |    header.converter = null
connect_1 |    key.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |    name = kafka-contact-connector
connect_1 |    predicates = []
connect_1 |    tasks.max = 1
connect_1 |    topic.creation.default.exclude = []
connect_1 |    topic.creation.default.include = [.*]
connect_1 |    topic.creation.default.partitions = 1
connect_1 |    topic.creation.default.replication.factor = 1
connect_1 |    topic.creation.groups = []
connect_1 |    transforms = [Reroute]
connect_1 |    transforms.Reroute.key.enforce.uniqueness = true
connect_1 |    transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.key.field.replacement = $1
connect_1 |    transforms.Reroute.negate = false
connect_1 |    transforms.Reroute.predicate = 
connect_1 |    transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*)
connect_1 |    transforms.Reroute.topic.replacement = contact.debezium.changes
connect_1 |    transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
connect_1 |    value.converter = class org.apache.kafka.connect.json.JsonConverter
connect_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
connect_1 | 2022-04-21 14:09:17,294 INFO || [Worker clientId=connect-1, groupId=1] Tasks [kafka-contact-connector-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 14:09:17,295 INFO || [Worker clientId=connect-1, groupId=1] Handling task config update by restarting tasks [kafka-contact-connector-0] [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 14:09:17,295 INFO || Stopping task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.Worker]
connect_1 | 2022-04-21 14:09:17,303 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:09:17,303 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 14:09:17,392 INFO MySQL|contact_db|streaming Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 14:09:17,392 INFO MySQL|contact_db|binlog Stopped reading binlog after 0 events, last recorded offset: {transaction_id=null, ts_sec=1650547697, file=mysql-bin.000005, pos=1285, server_id=223344, event=1} [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 14:09:17,395 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
connect_1 | 2022-04-21 14:09:17,395 INFO || [Producer clientId=contact_db-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
connect_1 | 2022-04-21 14:09:17,398 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,398 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,398 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,398 INFO || App info kafka.producer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,398 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
connect_1 | 2022-04-21 14:09:17,401 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,401 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,401 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,401 INFO || App info kafka.producer for connector-producer-kafka-contact-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,402 INFO || App info kafka.admin.client for connector-adminclient-kafka-contact-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,404 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,404 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,404 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,406 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
connect_1 | 2022-04-21 14:09:17,406 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
kafka_1 | 2022-04-21 14:09:17,408 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group 1 in state PreparingRebalance with old generation 10 (__consumer_offsets-49) (reason: Leader connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 re-joining group during Stable)
kafka_1 | 2022-04-21 14:09:17,409 - INFO [data-plane-kafka-request-handler-0:Logging@66] - [GroupCoordinator 1]: Stabilized group 1 generation 11 (__consumer_offsets-49) with 1 members
connect_1 | 2022-04-21 14:09:17,410 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=11, memberId='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
kafka_1 | 2022-04-21 14:09:17,412 - INFO [data-plane-kafka-request-handler-6:Logging@66] - [GroupCoordinator 1]: Assignment received from leader connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367 for group 1 for generation 11. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 14:09:17,414 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=11, memberId='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] connect_1 | 2022-04-21 14:09:17,414 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 11 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-dde6a537-3dba-4000-9646-5d73d3d1e367', leaderUrl='http://172.19.0.5:8083/', offset=13, connectorIds=[kafka-contact-connector], taskIds=[kafka-contact-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 14:09:17,414 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 13 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 14:09:17,415 INFO || [Worker clientId=connect-1, groupId=1] Starting task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 14:09:17,415 INFO || Creating task kafka-contact-connector-0 [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 14:09:17,416 INFO || ConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | transforms = [Reroute] connect_1 | value.converter = class 
org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig] connect_1 | 2022-04-21 14:09:17,417 INFO || EnrichedConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | transforms = [Reroute] connect_1 | transforms.Reroute.key.enforce.uniqueness = true connect_1 | transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.key.field.replacement = $1 connect_1 | transforms.Reroute.negate = false connect_1 | transforms.Reroute.predicate = connect_1 | transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.topic.replacement = contact.debezium.changes connect_1 | transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] connect_1 | 2022-04-21 14:09:17,417 INFO || TaskConfig values: connect_1 | task.class = class io.debezium.connector.mysql.MySqlConnectorTask connect_1 | [org.apache.kafka.connect.runtime.TaskConfig] connect_1 | 2022-04-21 14:09:17,417 INFO || Instantiated task kafka-contact-connector-0 with version 1.9.0.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 14:09:17,417 INFO || JsonConverterConfig values: connect_1 | converter.type = key connect_1 | decimal.format = BASE64 
connect_1 | schemas.cache.size = 1000 connect_1 | schemas.enable = false connect_1 | [org.apache.kafka.connect.json.JsonConverterConfig] connect_1 | 2022-04-21 14:09:17,418 INFO || JsonConverterConfig values: connect_1 | converter.type = value connect_1 | decimal.format = BASE64 connect_1 | schemas.cache.size = 1000 connect_1 | schemas.enable = false connect_1 | [org.apache.kafka.connect.json.JsonConverterConfig] connect_1 | 2022-04-21 14:09:17,418 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task kafka-contact-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 14:09:17,418 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task kafka-contact-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 14:09:17,418 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task kafka-contact-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 14:09:17,419 INFO || SourceConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig] connect_1 | 2022-04-21 14:09:17,419 INFO || EnrichedConnectorConfig 
values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | transforms.Reroute.key.enforce.uniqueness = true connect_1 | transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.key.field.replacement = $1 connect_1 | transforms.Reroute.negate = false connect_1 | transforms.Reroute.predicate = connect_1 | transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.topic.replacement = contact.debezium.changes connect_1 | transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] connect_1 | 2022-04-21 14:09:17,420 INFO || EnrichedSourceConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.default.exclude = [] connect_1 | 
topic.creation.default.include = [.*] connect_1 | topic.creation.default.partitions = 1 connect_1 | topic.creation.default.replication.factor = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig] connect_1 | 2022-04-21 14:09:17,421 INFO || EnrichedConnectorConfig values: connect_1 | config.action.reload = restart connect_1 | connector.class = io.debezium.connector.mysql.MySqlConnector connect_1 | errors.log.enable = false connect_1 | errors.log.include.messages = false connect_1 | errors.retry.delay.max.ms = 60000 connect_1 | errors.retry.timeout = 0 connect_1 | errors.tolerance = none connect_1 | header.converter = null connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | name = kafka-contact-connector connect_1 | predicates = [] connect_1 | tasks.max = 1 connect_1 | topic.creation.default.exclude = [] connect_1 | topic.creation.default.include = [.*] connect_1 | topic.creation.default.partitions = 1 connect_1 | topic.creation.default.replication.factor = 1 connect_1 | topic.creation.groups = [] connect_1 | transforms = [Reroute] connect_1 | transforms.Reroute.key.enforce.uniqueness = true connect_1 | transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.key.field.replacement = $1 connect_1 | transforms.Reroute.negate = false connect_1 | transforms.Reroute.predicate = connect_1 | transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) connect_1 | transforms.Reroute.topic.replacement = contact.debezium.changes connect_1 | transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] connect_1 | 
2022-04-21 14:09:17,421 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.ByLogicalTableRouter} [org.apache.kafka.connect.runtime.Worker] connect_1 | 2022-04-21 14:09:17,422 INFO || ProducerConfig values: connect_1 | acks = -1 connect_1 | batch.size = 16384 connect_1 | bootstrap.servers = [kafka:9092] connect_1 | buffer.memory = 33554432 connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connector-producer-kafka-contact-connector-0 connect_1 | compression.type = none connect_1 | connections.max.idle.ms = 540000 connect_1 | delivery.timeout.ms = 2147483647 connect_1 | enable.idempotence = true connect_1 | interceptor.classes = [] connect_1 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer connect_1 | linger.ms = 0 connect_1 | max.block.ms = 9223372036854775807 connect_1 | max.in.flight.requests.per.connection = 1 connect_1 | max.request.size = 1048576 connect_1 | metadata.max.age.ms = 300000 connect_1 | metadata.max.idle.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner connect_1 | receive.buffer.bytes = 32768 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | 
sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | 
ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | transaction.timeout.ms = 60000 connect_1 | transactional.id = null connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer connect_1 | [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 14:09:17,426 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 14:09:17,426 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 14:09:17,427 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,427 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,427 INFO || Kafka startTimeMs: 1650550157427 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,428 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [kafka:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = connector-adminclient-kafka-contact-connector-0 connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 2147483647 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 
connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol 
= TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,430 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,430 WARN || The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,430 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,430 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,430 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'plugin.path' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'value.converter' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,431 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,431 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,431 INFO || Kafka startTimeMs: 1650550157431 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,432 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,432 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 14:09:17,434 INFO || Starting MySqlConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || connector.class = io.debezium.connector.mysql.MySqlConnector [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || snapshot.locking.mode = none [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || topic.creation.default.partitions = 1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || tasks.max = 1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.history.kafka.topic = contact_db.schema-changes [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || transforms.Reroute.key.field.name = universe [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || transforms = Reroute [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 
14:09:17,434 INFO || include.schema.changes = true [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || transforms.Reroute.topic.replacement = contact.debezium.changes [io.debezium.connector.common.BaseSourceTask] mysql_1 | mbind: Operation not permitted connect_1 | 2022-04-21 14:09:17,434 INFO || topic.creation.default.replication.factor = 1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.history.store.only.captured.tables.ddl = true [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.user = root [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || transforms.Reroute.type = io.debezium.transforms.ByLogicalTableRouter [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.server.id = 438567 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || topic.creation.default.cleanup.policy = compact [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.history.kafka.bootstrap.servers = 172.19.0.4:9092 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.server.name = contact_db [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || transforms.Reroute.topic.regex = contact_db\.splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.port = 3306 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || 
transforms.Reroute.key.field.replacement = $1 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || key.converter.schemas.enable = false [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || column.exclude.list = .*\.last_.*,.*\.nouverts.*,.*\.nclicks.*,.*\.nenvois,.*\.nbounces,.*\.nbounces_sms,.*\.nclickssms,.*\.nx,.*\.nsms,.*\.nclicksms,.*\.ntransfo,.*\.npurchases,.*\.sommeca [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.serverTimezone = Europe/Paris [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || task.class = io.debezium.connector.mysql.MySqlConnectorTask [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.hostname = 172.19.0.3 [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.connectionTimeZone = Europe/Paris [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || value.converter.schemas.enable = false [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || name = kafka-contact-connector [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || transforms.Reroute.key.field.regex = contact_db\.splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || table.include.list = splio3_data\.(.*) [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || database.include.list = splio3_data [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,434 INFO || snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,449 INFO || Found previous partition 
offset MySqlPartition [sourcePartition={server=contact_db}]: {transaction_id=null, file=mysql-bin.000005, pos=236, row=1, event=2} [io.debezium.connector.common.BaseSourceTask] connect_1 | 2022-04-21 14:09:17,456 INFO || KafkaDatabaseHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=contact_db-dbhistory, bootstrap.servers=172.19.0.4:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=contact_db-dbhistory} [io.debezium.relational.history.KafkaDatabaseHistory] connect_1 | 2022-04-21 14:09:17,456 INFO || KafkaDatabaseHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=172.19.0.4:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=contact_db-dbhistory, linger.ms=0} [io.debezium.relational.history.KafkaDatabaseHistory] connect_1 | 2022-04-21 14:09:17,456 INFO || Requested thread factory for connector MySqlConnector, id = contact_db named = db-history-config-check [io.debezium.util.Threads] connect_1 | 2022-04-21 14:09:17,457 INFO || ProducerConfig values: connect_1 | acks = 1 connect_1 | batch.size = 32768 connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | buffer.memory = 1048576 connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | compression.type = none connect_1 | connections.max.idle.ms = 540000 connect_1 | delivery.timeout.ms = 120000 connect_1 | enable.idempotence = true connect_1 | interceptor.classes = [] connect_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer connect_1 | linger.ms = 0 connect_1 | max.block.ms = 10000 connect_1 | max.in.flight.requests.per.connection = 5 connect_1 | 
max.request.size = 1048576 connect_1 | metadata.max.age.ms = 300000 connect_1 | metadata.max.idle.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner connect_1 | receive.buffer.bytes = 32768 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 1 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub 
connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | transaction.timeout.ms = 60000 connect_1 | transactional.id = null connect_1 | value.serializer = class org.apache.kafka.common.serialization.StringSerializer connect_1 | [org.apache.kafka.clients.producer.ProducerConfig] connect_1 | 2022-04-21 14:09:17,460 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,460 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,460 INFO || Kafka startTimeMs: 1650550157460 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,461 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask] connect_1 | 2022-04-21 14:09:17,462 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] connect_1 | 2022-04-21 14:09:17,463 INFO || 
ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | 
sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 
| ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 14:09:17,465 INFO || [Producer clientId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,466 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,466 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,466 INFO || Kafka startTimeMs: 1650550157466 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,471 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,473 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 14:09:17,473 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 14:09:17,473 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,473 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,473 INFO || Metrics reporters closed 
[org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,474 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,474 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | 
sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 
connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 14:09:17,477 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,477 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,477 INFO || Kafka startTimeMs: 1650550157477 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,477 INFO || Creating thread debezium-mysqlconnector-contact_db-db-history-config-check [io.debezium.util.Threads] connect_1 | 2022-04-21 14:09:17,478 INFO || AdminClientConfig values: connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory-topic-check connect_1 | connections.max.idle.ms = 300000 connect_1 | default.api.timeout.ms = 60000 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retries = 1 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 
| sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null 
connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'value.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'acks' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'batch.size' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'max.block.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'buffer.memory' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'key.serializer' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 WARN || The configuration 'linger.ms' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] connect_1 | 2022-04-21 14:09:17,480 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,480 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,480 INFO || Kafka startTimeMs: 1650550157480 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,482 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,482 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,486 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 14:09:17,486 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 14:09:17,486 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,486 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,486 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,487 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,487 INFO || ConsumerConfig values: connect_1 | 
allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 
connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | 
ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 14:09:17,490 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,490 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,490 INFO || Kafka startTimeMs: 1650550157490 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,490 INFO || Database history topic 'contact_db.schema-changes' has correct settings [io.debezium.relational.history.KafkaDatabaseHistory] connect_1 | 2022-04-21 14:09:17,491 INFO || App info kafka.admin.client for contact_db-dbhistory-topic-check unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,492 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,492 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,492 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,492 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,493 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 14:09:17,493 INFO || [Consumer 
clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] connect_1 | 2022-04-21 14:09:17,494 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,494 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,494 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] connect_1 | 2022-04-21 14:09:17,494 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,495 INFO || ConsumerConfig values: connect_1 | allow.auto.create.topics = true connect_1 | auto.commit.interval.ms = 5000 connect_1 | auto.offset.reset = earliest connect_1 | bootstrap.servers = [172.19.0.4:9092] connect_1 | check.crcs = true connect_1 | client.dns.lookup = use_all_dns_ips connect_1 | client.id = contact_db-dbhistory connect_1 | client.rack = connect_1 | connections.max.idle.ms = 540000 connect_1 | default.api.timeout.ms = 60000 connect_1 | enable.auto.commit = false connect_1 | exclude.internal.topics = true connect_1 | fetch.max.bytes = 52428800 connect_1 | fetch.max.wait.ms = 500 connect_1 | fetch.min.bytes = 1 connect_1 | group.id = contact_db-dbhistory connect_1 | group.instance.id = null connect_1 | heartbeat.interval.ms = 3000 connect_1 | interceptor.classes = [] connect_1 | internal.leave.group.on.close = true connect_1 | internal.throw.on.fetch.stable.offset.unsupported = false connect_1 | isolation.level = read_uncommitted connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | max.partition.fetch.bytes = 1048576 connect_1 | max.poll.interval.ms = 300000 connect_1 | max.poll.records = 500 connect_1 | metadata.max.age.ms = 300000 connect_1 | 
metric.reporters = [] connect_1 | metrics.num.samples = 2 connect_1 | metrics.recording.level = INFO connect_1 | metrics.sample.window.ms = 30000 connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] connect_1 | receive.buffer.bytes = 65536 connect_1 | reconnect.backoff.max.ms = 1000 connect_1 | reconnect.backoff.ms = 50 connect_1 | request.timeout.ms = 30000 connect_1 | retry.backoff.ms = 100 connect_1 | sasl.client.callback.handler.class = null connect_1 | sasl.jaas.config = null connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit connect_1 | sasl.kerberos.min.time.before.relogin = 60000 connect_1 | sasl.kerberos.service.name = null connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05 connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 connect_1 | sasl.login.callback.handler.class = null connect_1 | sasl.login.class = null connect_1 | sasl.login.connect.timeout.ms = null connect_1 | sasl.login.read.timeout.ms = null connect_1 | sasl.login.refresh.buffer.seconds = 300 connect_1 | sasl.login.refresh.min.period.seconds = 60 connect_1 | sasl.login.refresh.window.factor = 0.8 connect_1 | sasl.login.refresh.window.jitter = 0.05 connect_1 | sasl.login.retry.backoff.max.ms = 10000 connect_1 | sasl.login.retry.backoff.ms = 100 connect_1 | sasl.mechanism = GSSAPI connect_1 | sasl.oauthbearer.clock.skew.seconds = 30 connect_1 | sasl.oauthbearer.expected.audience = null connect_1 | sasl.oauthbearer.expected.issuer = null connect_1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 connect_1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 connect_1 | sasl.oauthbearer.jwks.endpoint.url = null connect_1 | sasl.oauthbearer.scope.claim.name = scope connect_1 | sasl.oauthbearer.sub.claim.name = sub connect_1 | sasl.oauthbearer.token.endpoint.url = null connect_1 | 
security.protocol = PLAINTEXT connect_1 | security.providers = null connect_1 | send.buffer.bytes = 131072 connect_1 | session.timeout.ms = 10000 connect_1 | socket.connection.setup.timeout.max.ms = 30000 connect_1 | socket.connection.setup.timeout.ms = 10000 connect_1 | ssl.cipher.suites = null connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] connect_1 | ssl.endpoint.identification.algorithm = https connect_1 | ssl.engine.factory.class = null connect_1 | ssl.key.password = null connect_1 | ssl.keymanager.algorithm = SunX509 connect_1 | ssl.keystore.certificate.chain = null connect_1 | ssl.keystore.key = null connect_1 | ssl.keystore.location = null connect_1 | ssl.keystore.password = null connect_1 | ssl.keystore.type = JKS connect_1 | ssl.protocol = TLSv1.3 connect_1 | ssl.provider = null connect_1 | ssl.secure.random.implementation = null connect_1 | ssl.trustmanager.algorithm = PKIX connect_1 | ssl.truststore.certificates = null connect_1 | ssl.truststore.location = null connect_1 | ssl.truststore.password = null connect_1 | ssl.truststore.type = JKS connect_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig] connect_1 | 2022-04-21 14:09:17,496 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,497 INFO || Kafka commitId: 37edeed0777bacb3 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,497 INFO || Kafka startTimeMs: 1650550157496 [org.apache.kafka.common.utils.AppInfoParser] connect_1 | 2022-04-21 14:09:17,499 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 14:09:17,499 INFO || [Consumer clientId=contact_db-dbhistory, 
groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,503 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,503 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,504 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,504 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,504 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,505 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,505 INFO || Started database history recovery [io.debezium.relational.history.DatabaseHistoryMetrics]
connect_1 | 2022-04-21 14:09:17,505 INFO || ConsumerConfig values:
connect_1 |   allow.auto.create.topics = true
connect_1 |   auto.commit.interval.ms = 5000
connect_1 |   auto.offset.reset = earliest
connect_1 |   bootstrap.servers = [172.19.0.4:9092]
connect_1 |   check.crcs = true
connect_1 |   client.dns.lookup = use_all_dns_ips
connect_1 |   client.id = contact_db-dbhistory
connect_1 |   client.rack =
connect_1 |   connections.max.idle.ms = 540000
connect_1 |   default.api.timeout.ms = 60000
connect_1 |   enable.auto.commit = false
connect_1 |   exclude.internal.topics = true
connect_1 |   fetch.max.bytes = 52428800
connect_1 |   fetch.max.wait.ms = 500
connect_1 |   fetch.min.bytes = 1
connect_1 |   group.id = contact_db-dbhistory
connect_1 |   group.instance.id = null
connect_1 |   heartbeat.interval.ms = 3000
connect_1 |   interceptor.classes = []
connect_1 |   internal.leave.group.on.close = true
connect_1 |   internal.throw.on.fetch.stable.offset.unsupported = false
connect_1 |   isolation.level = read_uncommitted
connect_1 |   key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 |   max.partition.fetch.bytes = 1048576
connect_1 |   max.poll.interval.ms = 300000
connect_1 |   max.poll.records = 500
connect_1 |   metadata.max.age.ms = 300000
connect_1 |   metric.reporters = []
connect_1 |   metrics.num.samples = 2
connect_1 |   metrics.recording.level = INFO
connect_1 |   metrics.sample.window.ms = 30000
connect_1 |   partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
connect_1 |   receive.buffer.bytes = 65536
connect_1 |   reconnect.backoff.max.ms = 1000
connect_1 |   reconnect.backoff.ms = 50
connect_1 |   request.timeout.ms = 30000
connect_1 |   retry.backoff.ms = 100
connect_1 |   sasl.client.callback.handler.class = null
connect_1 |   sasl.jaas.config = null
connect_1 |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect_1 |   sasl.kerberos.min.time.before.relogin = 60000
connect_1 |   sasl.kerberos.service.name = null
connect_1 |   sasl.kerberos.ticket.renew.jitter = 0.05
connect_1 |   sasl.kerberos.ticket.renew.window.factor = 0.8
connect_1 |   sasl.login.callback.handler.class = null
connect_1 |   sasl.login.class = null
connect_1 |   sasl.login.connect.timeout.ms = null
connect_1 |   sasl.login.read.timeout.ms = null
connect_1 |   sasl.login.refresh.buffer.seconds = 300
connect_1 |   sasl.login.refresh.min.period.seconds = 60
connect_1 |   sasl.login.refresh.window.factor = 0.8
connect_1 |   sasl.login.refresh.window.jitter = 0.05
connect_1 |   sasl.login.retry.backoff.max.ms = 10000
connect_1 |   sasl.login.retry.backoff.ms = 100
connect_1 |   sasl.mechanism = GSSAPI
connect_1 |   sasl.oauthbearer.clock.skew.seconds = 30
connect_1 |   sasl.oauthbearer.expected.audience = null
connect_1 |   sasl.oauthbearer.expected.issuer = null
connect_1 |   sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
connect_1 |   sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
connect_1 |   sasl.oauthbearer.jwks.endpoint.url = null
connect_1 |   sasl.oauthbearer.scope.claim.name = scope
connect_1 |   sasl.oauthbearer.sub.claim.name = sub
connect_1 |   sasl.oauthbearer.token.endpoint.url = null
connect_1 |   security.protocol = PLAINTEXT
connect_1 |   security.providers = null
connect_1 |   send.buffer.bytes = 131072
connect_1 |   session.timeout.ms = 10000
connect_1 |   socket.connection.setup.timeout.max.ms = 30000
connect_1 |   socket.connection.setup.timeout.ms = 10000
connect_1 |   ssl.cipher.suites = null
connect_1 |   ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
connect_1 |   ssl.endpoint.identification.algorithm = https
connect_1 |   ssl.engine.factory.class = null
connect_1 |   ssl.key.password = null
connect_1 |   ssl.keymanager.algorithm = SunX509
connect_1 |   ssl.keystore.certificate.chain = null
connect_1 |   ssl.keystore.key = null
connect_1 |   ssl.keystore.location = null
connect_1 |   ssl.keystore.password = null
connect_1 |   ssl.keystore.type = JKS
connect_1 |   ssl.protocol = TLSv1.3
connect_1 |   ssl.provider = null
connect_1 |   ssl.secure.random.implementation = null
connect_1 |   ssl.trustmanager.algorithm = PKIX
connect_1 |   ssl.truststore.certificates = null
connect_1 |   ssl.truststore.location = null
connect_1 |   ssl.truststore.password = null
connect_1 |   ssl.truststore.type = JKS
connect_1 |   value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
connect_1 | 2022-04-21 14:09:17,508 INFO || Kafka version: 3.1.0 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,508 INFO || Kafka commitId: 37edeed0777bacb3
[org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,508 INFO || Kafka startTimeMs: 1650550157507 [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,508 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Subscribed to topic(s): contact_db.schema-changes [org.apache.kafka.clients.consumer.KafkaConsumer]
connect_1 | 2022-04-21 14:09:17,510 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting the last seen epoch of partition contact_db.schema-changes-0 to 0 since the associated topicId changed from null to I-6FAdvZT2KHBSXH0d61iA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,510 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Cluster ID: _Fc9d2urQwKYMwlE0QT5Dw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:09:17,513 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Discovered group coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,514 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 14:09:17,515 - INFO [data-plane-kafka-request-handler-3:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group contact_db-dbhistory in Empty state. Created a new member id contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c and request the member to rejoin with this id.
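The recovery pass above replays the contact_db.schema-changes topic through a short-lived consumer group, so the same join/leave handshake appears on both the connect_1 and kafka_1 sides. When scanning a long capture like this, the broker-generated member id is what ties the two sides together; a small sketch (the GroupCoordinator line is inlined verbatim from the log above) pulls it out:

```shell
# Extract the generated consumer-group member id from a GroupCoordinator log line,
# so the broker-side and connector-side halves of the join can be matched up.
line='[GroupCoordinator 1]: Dynamic member with unknown member id joins group contact_db-dbhistory in Empty state. Created a new member id contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c and request the member to rejoin with this id.'
member=$(printf '%s\n' "$line" | sed -n 's/.*Created a new member id \([^ ]*\) and.*/\1/p')
echo "$member"
# prints: contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c
```

Grepping the capture for that id shows the full lifecycle of the dbhistory group: join, stabilize, sync, and the explicit LeaveGroup once recovery finishes.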
connect_1 | 2022-04-21 14:09:17,516 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: need to re-join with the given member-id [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,516 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 14:09:17,517 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group contact_db-dbhistory in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c with group instance id None)
kafka_1 | 2022-04-21 14:09:17,518 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group contact_db-dbhistory generation 1 (__consumer_offsets-26) with 1 members
connect_1 | 2022-04-21 14:09:17,519 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Successfully joined group with generation Generation{generationId=1, memberId='contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,519 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Finished assignment for group at generation 1: {contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c=Assignment(partitions=[contact_db.schema-changes-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 14:09:17,520 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Assignment received from leader contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c for group contact_db-dbhistory for generation 1. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 14:09:17,522 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Successfully synced group in generation Generation{generationId=1, memberId='contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,522 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Notifying assignor about the new Assignment(partitions=[contact_db.schema-changes-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,522 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Adding newly assigned partitions: contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,523 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Found no committed offset for partition contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,525 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting offset for partition contact_db.schema-changes-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.19.0.4:9092 (id: 1 rack: null)], epoch=0}}.
[org.apache.kafka.clients.consumer.internals.SubscriptionState]
connect_1 | 2022-04-21 14:09:17,559 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Revoke previously assigned partitions contact_db.schema-changes-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,559 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Member contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c sending LeaveGroup request to coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,560 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
connect_1 | 2022-04-21 14:09:17,560 INFO || [Consumer clientId=contact_db-dbhistory, groupId=contact_db-dbhistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
kafka_1 | 2022-04-21 14:09:17,560 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group contact_db-dbhistory in state PreparingRebalance with old generation 1 (__consumer_offsets-26) (reason: Removing member contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c on LeaveGroup)
kafka_1 | 2022-04-21 14:09:17,561 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [GroupCoordinator 1]: Group contact_db-dbhistory with generation 2 is now empty (__consumer_offsets-26)
kafka_1 | 2022-04-21 14:09:17,562 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [GroupCoordinator 1]: Member MemberMetadata(memberId=contact_db-dbhistory-26017f63-285b-48a6-ba45-54e856eee41c, groupInstanceId=None, clientId=contact_db-dbhistory, clientHost=/172.19.0.5, sessionTimeoutMs=10000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group contact_db-dbhistory through explicit `LeaveGroup` request
connect_1 | 2022-04-21 14:09:17,563 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,563 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,563 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
connect_1 | 2022-04-21 14:09:17,565 INFO || App info kafka.consumer for contact_db-dbhistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
connect_1 | 2022-04-21 14:09:17,566 INFO || Finished database history recovery of 2 change(s) in 60 ms [io.debezium.relational.history.DatabaseHistoryMetrics]
connect_1 | 2022-04-21 14:09:17,573 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 14:09:17,576 INFO || Get all known binlogs from MySQL [io.debezium.connector.mysql.MySqlConnection]
connect_1 | 2022-04-21 14:09:17,577 INFO || MySQL has the binlog file 'mysql-bin.000005' required by the connector [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2022-04-21 14:09:17,577 INFO || Requested thread factory for connector MySqlConnector, id = contact_db named = change-event-source-coordinator [io.debezium.util.Threads]
connect_1 | 2022-04-21 14:09:17,577 INFO || Creating thread debezium-mysqlconnector-contact_db-change-event-source-coordinator [io.debezium.util.Threads]
connect_1 | 2022-04-21 14:09:17,578 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:09:17,578 INFO MySQL|contact_db|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 14:09:17,578 INFO MySQL|contact_db|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 14:09:17,578 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Executing source task [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:09:17,579 INFO MySQL|contact_db|snapshot A previous offset indicating a completed snapshot has been found. Neither schema nor data will be snapshotted. [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
connect_1 | 2022-04-21 14:09:17,579 INFO MySQL|contact_db|snapshot Snapshot ended with SnapshotResult [status=SKIPPED, offset=MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000005, currentBinlogPosition=236, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000005, restartBinlogPosition=236, restartRowsToSkip=1, restartEventsToSkip=2, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
connect_1 | 2022-04-21 14:09:17,579 INFO MySQL|contact_db|streaming Requested thread factory for connector MySqlConnector, id = contact_db named = binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 14:09:17,580 INFO MySQL|contact_db|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
mysql_1 | mbind: Operation not permitted
connect_1 | 2022-04-21 14:09:17,584 INFO MySQL|contact_db|streaming Skip 2 events on streaming start
[io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 14:09:17,584 INFO MySQL|contact_db|streaming Skip 1 rows on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 14:09:17,584 INFO MySQL|contact_db|streaming Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 14:09:17,584 INFO MySQL|contact_db|streaming Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
mysql_1 | mbind: Operation not permitted
connect_1 | Apr 21, 2022 2:09:17 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
connect_1 | INFO: Connected to 172.19.0.3:3306 at mysql-bin.000005/236 (sid:438567, cid:19)
connect_1 | 2022-04-21 14:09:17,588 INFO MySQL|contact_db|binlog Connected to MySQL binlog at 172.19.0.3:3306, starting at MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000005, currentBinlogPosition=236, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000005, restartBinlogPosition=236, restartRowsToSkip=1, restartEventsToSkip=2, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]] [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 14:09:17,588 INFO MySQL|contact_db|streaming Waiting for keepalive thread to start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 14:09:17,588 INFO MySQL|contact_db|binlog Creating thread debezium-mysqlconnector-contact_db-binlog-client [io.debezium.util.Threads]
connect_1 | 2022-04-21 14:09:17,688 INFO MySQL|contact_db|streaming Keepalive thread is running [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
connect_1 | 2022-04-21 14:10:17,433 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:11:17,434 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
kafka_1 | 2022-04-21 14:11:35,912 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group console-consumer-97056 in state PreparingRebalance with old generation 1 (__consumer_offsets-28) (reason: Removing member console-consumer-705c98c5-287e-4c69-ab28-2135b0edd77b on LeaveGroup)
kafka_1 | 2022-04-21 14:11:35,913 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Group console-consumer-97056 with generation 2 is now empty (__consumer_offsets-28)
kafka_1 | 2022-04-21 14:11:35,915 - INFO [data-plane-kafka-request-handler-4:Logging@66] - [GroupCoordinator 1]: Member MemberMetadata(memberId=console-consumer-705c98c5-287e-4c69-ab28-2135b0edd77b, groupInstanceId=None, clientId=console-consumer, clientHost=/172.19.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group console-consumer-97056 through explicit `LeaveGroup` request
kafka_1 | 2022-04-21 14:11:55,702 - INFO [data-plane-kafka-request-handler-6:Logging@66] - [GroupCoordinator 1]: Dynamic member with unknown member id joins group console-consumer-5295 in Empty state. Created a new member id console-consumer-a777c162-5d53-4598-aeec-0b5d8b17d9b3 and request the member to rejoin with this id.
kafka_1 | 2022-04-21 14:11:55,705 - INFO [data-plane-kafka-request-handler-2:Logging@66] - [GroupCoordinator 1]: Preparing to rebalance group console-consumer-5295 in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member console-consumer-a777c162-5d53-4598-aeec-0b5d8b17d9b3 with group instance id None)
kafka_1 | 2022-04-21 14:11:55,706 - INFO [executor-Rebalance:Logging@66] - [GroupCoordinator 1]: Stabilized group console-consumer-5295 generation 1 (__consumer_offsets-10) with 1 members
kafka_1 | 2022-04-21 14:11:55,713 - INFO [data-plane-kafka-request-handler-1:Logging@66] - [GroupCoordinator 1]: Assignment received from leader console-consumer-a777c162-5d53-4598-aeec-0b5d8b17d9b3 for group console-consumer-5295 for generation 1. The group has 1 members, 0 of which are static.
connect_1 | 2022-04-21 14:12:17,434 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:13:17,435 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:14:11,665 INFO || 1 records sent during previous 00:04:54.247, last recorded offset: {transaction_id=null, ts_sec=1650550451, file=mysql-bin.000005, pos=1364, row=1, server_id=223344, event=2} [io.debezium.connector.common.BaseSourceTask]
connect_1 | 2022-04-21 14:14:11,669 INFO || The task will send records to topic 'contact.debezium.changes' for the first time. Checking whether topic exists [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:14:11,675 INFO || Topic 'contact.debezium.changes' already exists. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:14:11,679 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Resetting the last seen epoch of partition contact.debezium.changes-0 to 0 since the associated topicId changed from null to 0RlDmGYWRauQMKie6FJ1CA [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,435 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:14:17,438 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node -1 disconnected.
[org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-0 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-5 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-10 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-20 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-15 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-9 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-11 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-4 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-16 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-17 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-3 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-24 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,438 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-23 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-13 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-18 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-22 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-8 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-2 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-12 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-19 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-14 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-1 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-6 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-7 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:14:17,439 INFO || [Producer clientId=producer-1] Resetting the last seen epoch of partition my_connect_offsets-21 to 0 since the associated topicId changed from null to oEOpzldgQWWSRVA80nIXfw [org.apache.kafka.clients.Metadata]
kafka_1 | 2022-04-21 14:14:29,229 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Group contact_db-dbhistory transitioned to Dead in generation 2
kafka_1 | 2022-04-21 14:14:29,230 - INFO [group-metadata-manager-0:Logging@66] - [GroupMetadataManager brokerId=1] Group console-consumer-97056 transitioned to Dead in generation 2
connect_1 | 2022-04-21 14:14:32,900 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:15:17,445 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:16:17,445 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:17:17,446 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors.
[org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:18:17,446 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:18:17,632 INFO || [Producer clientId=connector-producer-kafka-contact-connector-0] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:18:17,670 INFO || [Producer clientId=contact_db-dbhistory] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:19:17,447 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:19:17,540 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:19:33,102 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:20:17,447 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:21:17,448 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:22:17,448 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:23:17,449 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:24:17,450 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:24:17,742 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
zookeeper_1 | 2022-04-21 14:24:24,960 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@139] - Purge task started.
zookeeper_1 | 2022-04-21 14:24:24,961 - INFO [PurgeTask:FileTxnSnapLog@124] - zookeeper.snapshot.trust.empty : false
zookeeper_1 | 2022-04-21 14:24:24,973 - INFO [PurgeTask:PurgeTxnLog@157] - Removing file: Apr 21, 2022, 12:44:44 PM /zookeeper/data/version-2/snapshot.0
zookeeper_1 | Removing file: Apr 21, 2022, 12:44:44 PM /zookeeper/data/version-2/snapshot.0
zookeeper_1 | 2022-04-21 14:24:24,974 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@145] - Purge task completed.
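The only change event in this capture is the single record committed at 14:14:11 ("1 records sent ... last recorded offset: {... file=mysql-bin.000005, pos=1364 ...}"); the periodic "Either no records were produced" entries before and after it are normal idle-loop noise. When checking whether a connector actually advanced, those binlog coordinates are the field to watch; a sketch (sample line inlined verbatim from the log above) recovers them:

```shell
# Recover the binlog file and position from a Debezium "last recorded offset" line
# as printed by io.debezium.connector.common.BaseSourceTask in the capture above.
line='1 records sent during previous 00:04:54.247, last recorded offset: {transaction_id=null, ts_sec=1650550451, file=mysql-bin.000005, pos=1364, row=1, server_id=223344, event=2}'
file=$(printf '%s\n' "$line" | sed -n 's/.*file=\([^,]*\),.*/\1/p')
pos=$(printf '%s\n' "$line" | sed -n 's/.*pos=\([0-9]*\),.*/\1/p')
echo "$file/$pos"
# prints: mysql-bin.000005/1364
```

Comparing this against the starting position from the earlier "Connected to MySQL binlog" entry (mysql-bin.000005/236) confirms the connector moved forward in the binlog rather than idling.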
connect_1 | 2022-04-21 14:24:32,153 INFO || [Producer clientId=producer-3] Resetting the last seen epoch of partition my_connect_configs-0 to 0 since the associated topicId changed from null to fML0POG8THqTjJyfWIg5aQ [org.apache.kafka.clients.Metadata]
connect_1 | 2022-04-21 14:24:32,164 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2022-04-21 14:25:17,450 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:26:17,451 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:27:17,452 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:28:17,452 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:29:17,452 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:29:17,941 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:29:33,310 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:30:17,453 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:31:17,454 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:32:17,454 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:33:17,454 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:34:17,455 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:34:18,145 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:34:33,514 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
connect_1 | 2022-04-21 14:35:17,456 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:36:17,456 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:37:17,457 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1 | 2022-04-21 14:38:17,457 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors.
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:39:17,458 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:39:18,317 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:39:33,714 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:40:17,458 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:41:17,459 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:42:17,460 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:43:17,461 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:44:17,461 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:44:18,518 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:44:33,918 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:45:17,462 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:46:17,462 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:47:17,463 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:48:17,463 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:49:17,464 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:49:18,722 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:49:34,122 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:50:17,464 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:51:17,465 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:52:17,466 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:53:17,466 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:54:17,467 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:54:18,926 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:54:34,326 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:55:17,467 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:56:17,468 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:57:17,469 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:58:17,470 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:59:17,471 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 14:59:19,129 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 14:59:34,474 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:00:17,472 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:01:17,479 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:02:17,479 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:03:17,481 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:04:17,481 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:04:19,338 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:04:34,678 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:05:17,482 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:06:17,482 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:07:17,482 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:08:17,483 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:09:17,484 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:09:19,523 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:09:34,842 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:10:17,485 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:11:17,485 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:12:17,486 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:13:17,486 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:14:17,487 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:14:19,726 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:14:35,046 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:15:17,487 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:16:17,488 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:17:17,488 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:18:17,489 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:19:17,489 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:19:19,930 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:19:35,215 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:20:17,490 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:21:17,490 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:22:17,491 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:23:17,491 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:24:17,492 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:24:20,134 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] zookeeper_1 | 2022-04-21 15:24:24,965 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@139] - Purge task started. zookeeper_1 | 2022-04-21 15:24:24,966 - INFO [PurgeTask:FileTxnSnapLog@124] - zookeeper.snapshot.trust.empty : false zookeeper_1 | 2022-04-21 15:24:24,971 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@145] - Purge task completed. connect_1 | 2022-04-21 15:24:32,172 INFO || [Producer clientId=producer-3] Resetting the last seen epoch of partition my_connect_configs-0 to 0 since the associated topicId changed from null to fML0POG8THqTjJyfWIg5aQ [org.apache.kafka.clients.Metadata] connect_1 | 2022-04-21 15:24:32,193 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] connect_1 | 2022-04-21 15:25:17,492 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:26:17,493 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:27:17,494 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:28:17,498 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:29:17,501 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:29:20,340 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:29:35,422 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:30:17,502 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:31:17,506 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:32:17,507 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:33:17,508 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:34:17,509 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:34:20,544 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:34:35,626 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:35:17,514 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:36:17,514 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. 
[org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:37:17,519 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:38:17,520 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:39:17,520 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask] connect_1 | 2022-04-21 15:39:20,754 INFO || [AdminClient clientId=connector-adminclient-kafka-contact-connector-0] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:39:35,847 INFO || [AdminClient clientId=adminclient-8] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient] connect_1 | 2022-04-21 15:40:17,526 INFO || WorkerSourceTask{id=kafka-contact-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. [org.apache.kafka.connect.runtime.WorkerSourceTask]
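Two patterns dominate the log above. The recurring `WorkerSourceTask ... Either no records were produced` INFO line means the Debezium source task is running and polling, but has had nothing to emit since its last offset commit; typically either no changes are occurring in the captured tables, or the connector's include/exclude filters match no table. The periodic `Node 1 disconnected` lines from `NetworkClient` are idle AdminClient connections being closed after the broker/client idle timeout and are harmless at INFO level. For comparison, a registration payload for this `kafka-contact-connector` might look like the following sketch (the kind of JSON one would POST to the Connect REST API); every hostname, credential, database, and table name below is an assumption for illustration and does not come from the log:

```json
{
  "name": "kafka-contact-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "table.include.list": "inventory.contact",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

If `table.include.list` (or `database.include.list`) is set but matches no existing table, the task settles into exactly this quiet loop: started, healthy, and silently producing nothing. Checking those filters against the actual schema, then inserting a row into a captured table and watching for a change event, is a quick way to tell "no traffic" apart from "filtered everything out".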