JGroups / JGRP-2580

BATCH: batching of messages on the send side


    • Type: Feature Request
    • Resolution: Done
    • Priority: Major
    • 5.2
    • None
    • None

      Batch messages on the sender side in a protocol layer to increase throughput.

      Design: Messages are buffered and then packed into a wrapper message when enough messages have accumulated or a flush timeout is reached.  Protocols further down the stack only see the wrapper message, reducing per-message processing.  On the receiving side, the protocol unwraps the messages and passes them up one at a time in FIFO order, preserving ordering within the batch.
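The buffering scheme above can be sketched as follows. This is a minimal illustrative model, not the actual BATCH protocol code: the names (EarlyBatcher, maxSize, flushTimeoutMs, Unbatcher) and the generic message type are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/**
 * Minimal sender-side batcher: buffers messages and emits a single
 * wrapper (the batch) when maxSize is reached or the flush timeout fires.
 * Names and structure are illustrative, not the JGroups BATCH API.
 */
class EarlyBatcher<M> {
    private final int maxSize;
    private final long flushTimeoutMs;
    private final Consumer<List<M>> downProtocol; // next protocol down the stack
    private final List<M> buf = new ArrayList<>();
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pendingFlush;

    EarlyBatcher(int maxSize, long flushTimeoutMs, Consumer<List<M>> downProtocol) {
        this.maxSize = maxSize;
        this.flushTimeoutMs = flushTimeoutMs;
        this.downProtocol = downProtocol;
    }

    synchronized void send(M msg) {
        buf.add(msg);
        if (buf.size() >= maxSize) {
            flush();                        // batch full: send the wrapper immediately
        } else if (pendingFlush == null) {  // first buffered message: arm the flush timeout
            pendingFlush = timer.schedule(this::flush, flushTimeoutMs, TimeUnit.MILLISECONDS);
        }
    }

    synchronized void flush() {
        if (pendingFlush != null) {
            pendingFlush.cancel(false);
            pendingFlush = null;
        }
        if (buf.isEmpty())
            return;
        downProtocol.accept(new ArrayList<>(buf)); // one wrapper message for the whole batch
        buf.clear();
    }

    void stop() {
        timer.shutdownNow();
    }
}

/** Receiver side: unwrap the batch and deliver one message at a time, FIFO. */
class Unbatcher {
    static <M> void deliver(List<M> batch, Consumer<M> up) {
        for (M m : batch)
            up.accept(m); // in-order delivery preserves ordering within the batch
    }
}
```

With a batch size of 3, three sends produce exactly one wrapper for the layer below, which the receiver then unwraps in order.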

      This has three advantages:

      1) Only one header per batch for protocols further down the stack, reducing data overhead for small messages.

      2) Reduced processing for protocols further down the stack.

      3) Reduced work for TP bundlers, which appear to be a bottleneck: large throughput gains were observed even when the early batcher sat at the bottom of the stack.

       

      MPerf tests were run on a 4-node cluster with 4 senders, 100 threads per sender, and 100-byte messages.  Across three runs, early batching (tcp-eb) roughly quadrupled throughput over plain tcp:

      Stack     Run 1    Run 2    Run 3
      tcp        47.6    53.27    46.28
      tcp-eb   200.19   199.27   201.48

      The difference shrank as message size increased, reaching near parity at around 1 KB messages.

              Assignee: Bela Ban (rhn-engineering-bban)
              Reporter: Chris Johnson (cjljohnson) (Inactive)