Debezium / DBZ-5911

Debezium Server stops with NPE when Redis does not report the "maxmemory" field in "info memory" command


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 2.1.0.Beta1
    • Affects Version: 2.1.0.Alpha2
    • Component: debezium-server

      What is the connector configuration?

      Using the Redis sink with Redis Enterprise.

      What behaviour do you expect?

      A clear error message that explains the problem.

      What behaviour do you see?

      Just an NPE stack trace:

      2022-12-07 16:54:50,446 ERROR [io.deb.ser.ConnectorLifecycle] (pool-7-thread-1) Connector completed: success = 'false', message = 'Stopping connector after error in the application's handler method: Cannot parse null string', error = 'java.lang.NumberFormatException: Cannot parse null string': java.lang.NumberFormatException: Cannot parse null string
      at java.base/java.lang.Long.parseLong(Long.java:674)
      at java.base/java.lang.Long.parseLong(Long.java:836)
      at io.debezium.server.redis.RedisStreamChangeConsumer.isMemoryOk(RedisStreamChangeConsumer.java:247)
      at io.debezium.server.redis.RedisStreamChangeConsumer.lambda$connect$1(RedisStreamChangeConsumer.java:97)
      at io.debezium.server.redis.RedisStreamChangeConsumer.canHandleBatch(RedisStreamChangeConsumer.java:229)
      at io.debezium.server.redis.RedisStreamChangeConsumer.lambda$handleBatch$4(RedisStreamChangeConsumer.java:162)
      at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
      at java.base/java.util.stream.IntPipeline$1$1.accept(IntPipeline.java:180)
      at java.base/java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:104)
      at java.base/java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:711)
      at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
      at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
      at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
      at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
      at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
      at io.debezium.server.redis.RedisStreamChangeConsumer.handleBatch(RedisStreamChangeConsumer.java:142)
      at io.debezium.embedded.ConvertingEngineBuilder.lambda$notifying$2(ConvertingEngineBuilder.java:86)
      at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:913)
      at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:195)
      at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:151)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
      at java.base/java.lang.Thread.run(Thread.java:833)
      

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      Yes

      Implementation ideas (optional)

      The issue is that Redis Enterprise does not report the 'maxmemory' field in the output of the 'info memory' command.
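
      For illustration, here is a minimal, self-contained sketch of how a missing INFO field can surface as the NumberFormatException above. The infoField helper and class name are hypothetical, not the actual RedisStreamChangeConsumer code:

      import java.util.Arrays;

      public class MaxMemoryParseExample {

          // Hypothetical helper: pulls a "key:value" line out of an INFO reply,
          // returning null when the field is absent (as on Redis Enterprise).
          static String infoField(String infoReply, String key) {
              return Arrays.stream(infoReply.split("\r\n"))
                      .filter(line -> line.startsWith(key + ":"))
                      .map(line -> line.substring(key.length() + 1))
                      .findFirst()
                      .orElse(null);
          }

          public static void main(String[] args) {
              // A Redis Enterprise "info memory" reply carries no "maxmemory" line.
              String infoReply = "# Memory\r\nused_memory:1048576\r\n";

              String maxMemory = infoField(infoReply, "maxmemory"); // -> null
              // On JDK 9+ this throws
              // java.lang.NumberFormatException: Cannot parse null string
              long limit = Long.parseLong(maxMemory);
          }
      }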

      To keep the threshold-percentage mechanism, a second property, memory.limit.mb (in megabytes), could be defined and used when the 'info memory' output does not contain the 'maxmemory' field, or when 'maxmemory' is 0 (unlimited). In short, the threshold percentage would apply to memory.limit.mb whenever maxmemory is missing or 0 (see the sketch below).
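      A hedged sketch of that fallback, reusing the illustrative infoField helper from the sketch above; the method signature and the way memory.limit.mb reaches the check are assumptions, not the final implementation:

      // Illustrative only: a fallback variant of the memory check. The signature
      // and parameter names are assumptions; memory.limit.mb arrives as a long.
      static boolean isMemoryOk(String infoMemoryReply, int thresholdPercentage, long memoryLimitMb) {
          String maxMemoryStr = infoField(infoMemoryReply, "maxmemory");
          // A missing field (Redis Enterprise) and 0 both mean "no server-side limit".
          long maxMemory = (maxMemoryStr == null) ? 0L : Long.parseLong(maxMemoryStr);
          if (maxMemory == 0L) {
              if (memoryLimitMb <= 0) {
                  return true; // no configured limit either: skip the check
              }
              // Apply the threshold percentage to memory.limit.mb instead.
              maxMemory = memoryLimitMb * 1024L * 1024L;
          }
          long usedMemory = Long.parseLong(infoField(infoMemoryReply, "used_memory"));
          return usedMemory * 100L < maxMemory * thresholdPercentage;
      }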

            Assignee: Unassigned
            Reporter: Gabor Andras (ggaborg)
