Message batching is beneficial on the sender side, as the cost of sending N messages is amortized over a batch of N.
On the receiver side, we process regular messages (from the same sender) sequentially. This is fine, as regular messages need to be delivered in strict send-order.
However, OOB messages have no ordering guarantees and can therefore be delivered in parallel. This already happens for single OOB messages, but OOB message batches are still passed up as a batch, and most applications probably process the batch contents one by one.
This is bad for latency, as the N-th message in a batch has to wait (N-1) * delivery-time until it gets processed. For example, with a batch of 10 messages and 100 microseconds of processing time per message, the last message waits 900 microseconds before it is even looked at.
In environments with large thread pools, or with virtual threads, we could simply 'unbatch' OOB message batches and deliver each message on a separate thread. This would eliminate the above waiting time and reduce latency.
Create a new MessageProcessingPolicy impl (UnbatchOOBBatches), which unbatches OOB batches into single OOB messages, each delivered by a separate thread.
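The core of such a policy could look like the sketch below. This is not the actual JGroups MessageProcessingPolicy interface; the Message type, the deliver callback, and the class shape are simplified placeholders that only illustrate the dispatch logic: regular batches stay sequential (preserving send-order), OOB batches fan out one message per thread.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

// Illustrative sketch, NOT the real JGroups API: shows the unbatching
// idea behind a hypothetical UnbatchOOBBatches policy. Messages are
// modeled as plain strings for simplicity.
public class UnbatchOOBBatches {
    private final ExecutorService pool; // e.g. the transport's thread pool, or a virtual-thread executor

    public UnbatchOOBBatches(ExecutorService pool) {
        this.pool = pool;
    }

    // Regular batches must preserve strict send-order:
    // deliver the messages sequentially on the calling thread.
    public void processRegular(List<String> batch, Consumer<String> deliver) {
        for (String msg : batch)
            deliver.accept(msg);
    }

    // OOB batches carry no ordering guarantees: submit each message
    // to the pool individually, so the N-th message no longer waits
    // (N-1) * delivery-time behind its predecessors.
    public void processOOB(List<String> batch, Consumer<String> deliver) {
        for (String msg : batch)
            pool.execute(() -> deliver.accept(msg));
    }
}
```

With virtual threads (Java 21+), `Executors.newVirtualThreadPerTaskExecutor()` would be a natural choice for the pool, since unbatching can create one short-lived task per message without exhausting a fixed-size pool.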