AMQ Streams / ENTMQST-4171

Cannot unpack libzstd-jni-1.5.0.2-redhat-00003: No space left on device


    • Type: Bug
    • Resolution: Done
    • Priority: Blocker
    • Fix Version: 2.2.0.GA
    • Affects Version: 2.1.0.GA

      The latest AMQ Streams image has a 5Mi /tmp directory by default:

      $ kubectl exec my-cluster-kafka-1 -- df -h /tmp
      Filesystem      Size  Used Avail Use% Mounted on
      tmpfs           5.0M   52K  5.0M   2% /tmp
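
For context, the failure below is triggered as soon as a producer starts sending zstd-compressed batches, which takes a single client property (a minimal fragment; the rest of the producer configuration, broker address, and topic are assumed):

```properties
# producer.properties -- enabling zstd makes the broker load the zstd-jni native library
compression.type=zstd
```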
      

      I get an unrecoverable error if I switch my producer to zstd compression:

      2022-07-27 15:45:34,418 ERROR [ReplicaManager broker=1] Error processing append operation on partition my-topic-1 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-1]
      org.apache.kafka.common.KafkaException: java.lang.ExceptionInInitializerError: Cannot unpack libzstd-jni-1.5.0.2-redhat-00003: No space left on device
      	at org.apache.kafka.common.compress.ZstdFactory.wrapForInput(ZstdFactory.java:70)
      	at org.apache.kafka.common.record.CompressionType$5.wrapForInput(CompressionType.java:127)
      	at org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:279)
      	at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:283)
      	at org.apache.kafka.common.record.DefaultRecordBatch.skipKeyValueIterator(DefaultRecordBatch.java:361)
      	at kafka.log.LogValidator$.$anonfun$validateMessagesAndAssignOffsetsCompressed$1(LogValidator.scala:414)
      	at java.base/java.lang.Iterable.forEach(Iterable.java:75)
      	at kafka.log.LogValidator$.validateMessagesAndAssignOffsetsCompressed(LogValidator.scala:407)
      	at kafka.log.LogValidator$.validateMessagesAndAssignOffsets(LogValidator.scala:112)
      	at kafka.log.UnifiedLog.append(UnifiedLog.scala:802)
      	at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:718)
      	at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1057)
      	at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1045)
      	at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:924)
      	at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
      	at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
      	at scala.collection.mutable.HashMap.map(HashMap.scala:35)
      	at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:912)
      	at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:583)
      	at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:658)
      	at kafka.server.KafkaApis.handle(KafkaApis.scala:169)
      	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
      	at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: java.lang.ExceptionInInitializerError: Cannot unpack libzstd-jni-1.5.0.2-redhat-00003: No space left on device
      	at java.base/java.io.FileOutputStream.writeBytes(Native Method)
      	at java.base/java.io.FileOutputStream.write(FileOutputStream.java:354)
      	at com.github.luben.zstd.util.Native.load(Native.java:110)
      	at com.github.luben.zstd.util.Native.load(Native.java:55)
      	at com.github.luben.zstd.ZstdInputStreamNoFinalizer.<clinit>(ZstdInputStreamNoFinalizer.java:23)
      	at org.apache.kafka.common.compress.ZstdFactory.wrapForInput(ZstdFactory.java:67)
      	... 22 more
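
The root cause is that zstd-jni extracts its bundled native library into the JVM temporary directory (java.io.tmpdir, which defaults to /tmp) the first time a zstd stream is created; with only 5Mi available, the write fails with "No space left on device". The directory a given JVM will use can be checked with a trivial program (a sketch; it assumes the broker JVM runs with the default tmpdir):

```java
public class TmpDirCheck {
    public static void main(String[] args) {
        // zstd-jni's Native.load() unpacks libzstd-jni-<version>.so under this path,
        // so the filesystem backing it must have room for the shared library.
        System.out.println("java.io.tmpdir = " + System.getProperty("java.io.tmpdir"));
    }
}
```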
      

      Workaround: raise the /tmp size limit to 10Mi via the pod template:

      spec:
        kafka:
          template:
            pod:
              tmpDirSizeLimit: 10Mi
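
For completeness, tmpDirSizeLimit is a field of the Strimzi pod template, so the fragment above lives under the Kafka custom resource; the same field also exists on the other component templates (e.g. ZooKeeper). A fuller sketch of where it sits (the cluster name is an assumption, chosen to match the pod names above):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster             # assumed cluster name
spec:
  kafka:
    template:
      pod:
        tmpDirSizeLimit: 10Mi  # backs the emptyDir volume mounted at /tmp
```

The operator rolls the broker pods after this change; the new limit can then be confirmed with the same df check shown at the top of the report.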
      

            Kyle Liberti (kliberti)
            Federico Valeri (rhn-support-fvaleri)
            Maros Orsak
