AMQ Streams / ENTMQST-3826

The /tmp volume is not big enough for the compression libraries


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Affects Version: 2.1.0.GA

      When compression is used to compress or decompress messages, Kafka relies on compression libraries that use natively compiled binaries. These libraries are unpacked into the /tmp directory, and when using something like Zstd, whose native library is over 1 MB in size, it does not fit into the available space and the broker fails (see the sketch of the /tmp volume after the stack trace):

      2022-02-25 20:07:52,875 ERROR [ReplicaManager broker=0] Error processing append operation on partition my-source-cluster.my-topic-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-1]
      org.apache.kafka.common.KafkaException: java.lang.ExceptionInInitializerError: Cannot unpack libzstd-jni-1.5.0-4: No space left on device
      at org.apache.kafka.common.compress.ZstdFactory.wrapForOutput(ZstdFactory.java:45)
      at org.apache.kafka.common.record.CompressionType$5.wrapForOutput(CompressionType.java:122)
      at org.apache.kafka.common.record.MemoryRecordsBuilder.<init>(MemoryRecordsBuilder.java:140)
      at org.apache.kafka.common.record.MemoryRecordsBuilder.<init>(MemoryRecordsBuilder.java:160)
      at org.apache.kafka.common.record.MemoryRecordsBuilder.<init>(MemoryRecordsBuilder.java:198)
      at org.apache.kafka.common.record.MemoryRecords.builder(MemoryRecords.java:593)
      at kafka.log.LogValidator$.buildRecordsAndAssignOffsets(LogValidator.scala:513)
      at kafka.log.LogValidator$.validateMessagesAndAssignOffsetsCompressed(LogValidator.scala:466)
      at kafka.log.LogValidator$.validateMessagesAndAssignOffsets(LogValidator.scala:112)
      at kafka.log.UnifiedLog.append(UnifiedLog.scala:802)
      at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:718)
      at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1057)
      at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1045)
      at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:924)
      at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
      at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
      at scala.collection.mutable.HashMap.map(HashMap.scala:35)
      at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:912)
      at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:583)
      at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:658)
      at kafka.server.KafkaApis.handle(KafkaApis.scala:169)
      at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
      at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: java.lang.ExceptionInInitializerError: Cannot unpack libzstd-jni-1.5.0-4: No space left on device
      at java.base/java.io.FileOutputStream.writeBytes(Native Method)
      at java.base/java.io.FileOutputStream.write(FileOutputStream.java:354)
      at com.github.luben.zstd.util.Native.load(Native.java:110)
      at com.github.luben.zstd.util.Native.load(Native.java:55)
      at com.github.luben.zstd.ZstdOutputStreamNoFinalizer.<clinit>(ZstdOutputStreamNoFinalizer.java:18)
      at org.apache.kafka.common.compress.ZstdFactory.wrapForOutput(ZstdFactory.java:43)
      ... 22 more
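
      For context, the Cluster Operator mounts /tmp in the Kafka containers as a small, memory-backed emptyDir volume, so there is very little room for unpacked native libraries. The snippet below is only a rough sketch of what the generated pod spec looks like; the volume name and the 1Mi default size limit are illustrative assumptions, not values taken from this issue.

      # Rough sketch of the /tmp volume in the operator-generated broker pod
      # (volume name and default size limit are illustrative assumptions)
      volumes:
        - name: strimzi-tmp
          emptyDir:
            medium: Memory
            sizeLimit: "1Mi"      # too small for the zstd-jni native library (over 1 MB)
      containers:
        - name: kafka
          volumeMounts:
            - name: strimzi-tmp
              mountPath: /tmp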

      Users can increase the size of the /tmp volume through the pod templates, but this sounds like something that should work out of the box.
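
      For example, assuming the pod template in this version exposes the tmpDirSizeLimit option, the /tmp volume can be enlarged in the Kafka custom resource along these lines (cluster name and size are illustrative):

      apiVersion: kafka.strimzi.io/v1beta2
      kind: Kafka
      metadata:
        name: my-cluster
      spec:
        kafka:
          # ... broker configuration ...
          template:
            pod:
              tmpDirSizeLimit: 100Mi   # give native compression libraries room to unpack
        zookeeper:
          # ... ZooKeeper configuration ...
          template:
            pod:
              tmpDirSizeLimit: 100Mi

      The same pod template option should be settable on the other operands where compression is used.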

            Assignee: Unassigned
            Reporter: Jakub Scholz (scholzj)
            Lukas Kral
