- Feature Request
- Resolution: Done
- Major
- None
- None
- False
- False
Batch messages on the sender side in a protocol layer to increase throughput.
Design: Messages are buffered and packed into a wrapper message when enough messages have accumulated or a flush timeout expires. Protocols further down the stack see only the wrapper message, reducing the amount of processing. On the receiver side, the protocol unwraps the batch and passes the messages up one at a time, in FIFO order, to preserve ordering within the batch.
This has three advantages:
1) Only one header per batch for protocols further down the stack, reducing data overhead for small messages.
2) Reduced processing for protocols further down the stack.
3) Reduced work for TP bundlers, which appear to be a bottleneck: large throughput gains were seen even when the early batcher sat at the bottom of the stack.
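The buffering/flush logic described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual JGroups protocol code: the class and method names (EarlyBatcher, offer, flushIfStale, deliver) and the use of String payloads are assumptions for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of sender-side early batching: messages are buffered
// and emitted as one "wrapper" batch once the size threshold is reached or
// the flush timeout elapses. The receiver unwraps and delivers FIFO.
public class EarlyBatcher {
    private final int maxBatchSize;
    private final long flushTimeoutMs;
    private List<String> buffer = new ArrayList<>();
    private long firstBufferedAt;

    public EarlyBatcher(int maxBatchSize, long flushTimeoutMs) {
        this.maxBatchSize = maxBatchSize;
        this.flushTimeoutMs = flushTimeoutMs;
    }

    /** Buffers msg; returns the full batch (wrapper payload) once the
     *  size threshold is hit, otherwise null. */
    public List<String> offer(String msg) {
        if (buffer.isEmpty())
            firstBufferedAt = System.currentTimeMillis();
        buffer.add(msg);
        return buffer.size() >= maxBatchSize ? drain() : null;
    }

    /** Called periodically (e.g. by a timer task): flushes a partial
     *  batch once the flush timeout has elapsed. */
    public List<String> flushIfStale() {
        if (!buffer.isEmpty() &&
            System.currentTimeMillis() - firstBufferedAt >= flushTimeoutMs)
            return drain();
        return null;
    }

    private List<String> drain() {
        List<String> batch = buffer;
        buffer = new ArrayList<>();
        return batch;
    }

    /** Receiver side: unwraps the batch and passes messages up one at a
     *  time; list iteration preserves the sender's FIFO order. */
    public static void deliver(List<String> batch, Consumer<String> up) {
        for (String msg : batch)
            up.accept(msg);
    }
}
```

Protocols below the batcher would see one wrapper (one header, one pass through the stack) instead of maxBatchSize individual messages.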
MPerf tests were run on a 4-node cluster with 4 senders, 100 threads per sender, and 100-byte messages. Throughput across three runs shows that early batching increased throughput significantly:

| Config | Run 1  | Run 2  | Run 3  |
| tcp    | 47.6   | 53.27  | 46.28  |
| tcp-eb | 200.19 | 199.27 | 201.48 |

The difference shrank as message size increased, reaching near parity at around 1 KB messages.
- is cloned by JGRP-2591 "BATCH: productize" (Open)