Type: Bug
Resolution: Unresolved
Priority: Major
Affects Version: 2.5.0.Final
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
I am using Debezium Server with the Postgres connector, Debezium version 2.5.0.
What is the connector configuration?
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore
debezium.source.offset.storage.redis.address=${REDIS_HOST:localhost:6379}
#debezium.source.offset.storage.redis.password=${REDIS_PASSWORD:}
debezium.source.offset.storage.redis.key=${REDIS_OFFSET_KEY:metadata:debezium:offsets}
debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory
debezium.source.schema.history.internal.redis.address=${REDIS_HOST:localhost:6379}
#debezium.source.schema.history.internal.redis.password=${REDIS_PASSWORD:}
debezium.source.schema.history.internal.redis.key=${REDIS_HISTORY_KEY:metadata:debezium:schema_history}
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=${DB_HOST:localhost}
debezium.source.database.port=${DB_PORT:7432}
debezium.source.database.user=${DB_USER:user}
debezium.source.database.password=${DB_PASSWORD:pass}
debezium.source.database.dbname=${DB_NAME:db}
debezium.source.table.include.list=${DB_SCHEMA:public}.${DB_TABLE:events}
debezium.source.schema.include.list=${DB_SCHEMA:public}
debezium.source.database.sslmode=${DB_SSLMODE:prefer}
debezium.source.slot.retry.delay.ms=${DB_SLOT_RETRY_MS:30000}
debezium.source.slot.max.retries=${DB_SLOT_MAX_RETRIES:10}
# debezium.source.plugin.name=pgoutput
debezium.source.publication.autocreate.mode=disabled
debezium.source.publication.name=${DB_PUBLICATION:events_publication}
debezium.source.slot.name=debezium
debezium.source.key.converter=org.apache.kafka.connect.json.JsonConverter
debezium.source.key.converter.schemas.enable=false
debezium.source.value.converter=org.apache.kafka.connect.json.JsonConverter
debezium.source.value.converter.schemas.enable=false
debezium.source.topic.prefix=${SERVICE_ENV_NAME:cm}
debezium.source.topic.delimiter=${TOPIC_DELIMITER:_}
debezium.source.heartbeat.interval.ms=${HEARTBEAT_INTERVAL_MS:1000}
debezium.source.topic.heartbeat.prefix=${HEARTBEAT_PREFIX:heartbeat}
debezium.source.heartbeat.action.query=${HEARTBEAT_ACTION_QUERY:INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat')}
debezium.sink.type=kafka
debezium.sink.kafka.topic=events
debezium.sink.kafka.producer.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS:localhost:9096}
debezium.sink.kafka.producer.key.serializer=org.apache.kafka.common.serialization.StringSerializer
debezium.sink.kafka.producer.value.serializer=org.apache.kafka.common.serialization.StringSerializer
debezium.transforms=outbox
debezium.transforms.outbox.route.topic.replacement=${OUTBOX_TOPIC:events}
debezium.transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
debezium.transforms.outbox.table.expand.json.payload=true
debezium.transforms.outbox.route.by.field=subject
debezium.transforms.outbox.table.field.event.key=subject
debezium.transforms.outbox.table.fields.additional.placement=subject:header:subject,subject_id:header:subjectId,type:header:type
# ############ SET LOG LEVELS ############
quarkus.http.port=8080
quarkus.log.level=INFO
quarkus.log.console.json=true
# Ignore messages below warning level from Jetty, because it's a bit verbose
quarkus.log.category."org.eclipse.jetty".level=WARN
debezium.source.offset.storage.redis.password=${REDIS_PASSWORD:}
debezium.source.offset.storage.redis.ssl.enabled=true
debezium.source.schema.history.internal.redis.password=${REDIS_PASSWORD:}
debezium.source.schema.history.internal.redis.ssl.enabled=true
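As a sanity check for the Redis timeouts shown in the logs below, a minimal standalone probe against the same Redis endpoint (same address, password, and ssl.enabled settings as above) can rule out basic connectivity/TLS problems. This is only a rough sketch using the Jedis client; the host name is a placeholder, and the assumption that the offsets live in a Redis hash under the configured key metadata:debezium:offsets should be verified.

import java.util.Map;
import redis.clients.jedis.DefaultJedisClientConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;

public class RedisOffsetStoreProbe {
    public static void main(String[] args) {
        // Placeholder endpoint; in the real setup this is whatever REDIS_HOST resolves to.
        HostAndPort endpoint = new HostAndPort("redis.example.internal", 6379);

        DefaultJedisClientConfig clientConfig = DefaultJedisClientConfig.builder()
                .ssl(true)                                   // mirrors redis.ssl.enabled=true above
                .password(System.getenv("REDIS_PASSWORD"))   // mirrors ${REDIS_PASSWORD:}
                .connectionTimeoutMillis(2000)
                .socketTimeoutMillis(2000)                   // a slower read surfaces the same SocketTimeoutException
                .build();

        try (Jedis jedis = new Jedis(endpoint, clientConfig)) {
            System.out.println("PING -> " + jedis.ping());
            // Assumption: offsets are kept in a hash under the key configured above.
            Map<String, String> offsets = jedis.hgetAll("metadata:debezium:offsets");
            System.out.println("stored offsets: " + offsets);
        }
    }
}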
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
Postgres 11, Azure Managed.
What behaviour do you expect?
No exception should be thrown
What behaviour do you see?
A NullPointerException is thrown when storing Redis offsets.
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
Yes
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
2024-05-16 12:17:18,953 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (RedisOffsetBackingStore-1) Writing to Redis offset store failed with io.debezium.storage.redis.RedisClientConnectionException: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
2024-05-16 12:17:18,953 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (RedisOffsetBackingStore-1) Will retry
2024-05-16 12:17:18,953 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (RedisOffsetBackingStore-1) Attempting to reconnect to Redis
2024-05-16 12:17:21,951 ERROR [io.deb.emb.EmbeddedEngine] (pool-12-thread-1) Timed out waiting to flush EmbeddedEngine{id=kafka} offsets to storage
2024-05-16 12:17:22,381 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (executor-thread-5802) Writing to Redis offset store failed with java.lang.NullPointerException
2024-05-16 12:17:22,381 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (executor-thread-5802) Will retry
2024-05-16 12:17:23,645 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (executor-thread-5802) Writing to Redis offset store failed with java.lang.NullPointerException
2024-05-16 12:17:23,646 WARN [io.deb.sto.red.off.RedisOffsetBackingStore] (executor-thread-5802) Will retry
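For illustration only (this is not the actual RedisOffsetBackingStore code): the log pattern above is consistent with a retry loop whose reconnect step fails after the initial read timeout, leaving the client reference null, so that every subsequent write attempt fails with a NullPointerException instead of re-establishing the connection. A simplified, hypothetical sketch of that suspected shape:

import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisConnectionException;

// Hypothetical retry loop; names and structure do not come from Debezium.
public class OffsetWriteRetrySketch {

    private volatile Jedis client;   // left null if a reconnect attempt fails

    void writeOffsets(String key, Map<String, String> offsets) throws InterruptedException {
        while (true) {
            try {
                client.hset(key, offsets);       // throws NullPointerException once client is null
                return;
            } catch (JedisConnectionException e) {
                // e.g. java.net.SocketTimeoutException: Read timed out -> try to reconnect
                client = tryReconnect();         // may return null while Redis is still unreachable
            } catch (NullPointerException e) {
                // If nothing re-initializes the client here, the loop produces the endless
                // "Writing to Redis offset store failed with java.lang.NullPointerException /
                // Will retry" sequence seen in the log above.
            }
            Thread.sleep(1_000);
        }
    }

    private Jedis tryReconnect() {
        // Placeholder: a real implementation would rebuild the connection; returning null
        // models the case where the reconnect itself does not succeed.
        return null;
    }
}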
How to reproduce the issue using our tutorial deployment?
A Redis timeout occurred while storing the offset, and the connector then got stuck in a crash loop of NullPointerExceptions.
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
Storing Debezium offsets in the Redis offset store.
Implementation ideas (optional)
<Your answer>