Node A wants to execute a PutMapCommand (putAll) with many keys; let's assume the keys in fact span all nodes in the cluster. The current flow is:
1. A locks all local keys and sends a unicast request to each primary owner of some of the keys in the map.
2. Each primary owner locks its keys and sends a multicast message to ALL other nodes in the cluster; this happens on each of the N - 1 primary owners.
3. Each node receives the multicast message, updates its non-primary segments, and sends a reply back to the sender of the multicast.
4. The primary owners send confirmations back to A.
Let's count how many messages are received:
N - 1              // A's requests to the primary owners
(N - 1) * (N - 1)  // multicast messages received
(N - 1) * (N - 1)  // replies to the multicast messages
N - 1              // responses back to A
That totals 2*N^2 - 2*N messages, and that is the happy path: nobody needs flow control replenishments, nothing is lost, etc. With that quadratic term, the cluster is not really scaling.
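The tally above can be checked mechanically. This is a throwaway counting model, not Infinispan code; the function name is invented for illustration.

```python
# Sum the four per-step counts and confirm they match the closed form
# 2*N^2 - 2*N for any cluster size N >= 2.

def messages_current(n):
    return ((n - 1)                  # A's requests to the primary owners
            + (n - 1) * (n - 1)      # multicast messages received
            + (n - 1) * (n - 1)      # replies to the multicasts
            + (n - 1))               # confirmations back to A

for n in range(2, 10):
    assert messages_current(n) == 2 * n * n - 2 * n
```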
Could the requestor orchestrate the whole operation instead? The idea is that messages flow only between the requestor and the other nodes, never between the other nodes themselves. The requestor would lock the primary keys with one set of messages (waiting for the replies), update the non-primaries with another set, and finally unlock all primaries with a last set of messages.
Each set of messages could be either unicasts carrying only the keys relevant to each recipient, or a multicast carrying the whole map; which one is actually better is a question for performance testing.
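The orchestrated flow can be sketched as a message log. This is a hypothetical model, not Infinispan code; the phase names and the `orchestrated_messages` helper are invented for illustration. Every message has A as either its sender or its recipient.

```python
# A alone drives three phases -- lock the primaries, update the
# backups, unlock -- against the other N - 1 nodes.

def orchestrated_messages(n, unlock_needs_reply=True):
    """Return the list of (phase, sender, recipient) messages for one
    putAll on an N-node cluster where the keys span all nodes."""
    a, others = "A", [f"node{i}" for i in range(1, n)]
    log = []
    for phase in ("lock", "update", "unlock"):
        for node in others:
            log.append((phase, a, node))        # A's request
            if phase != "unlock" or unlock_needs_reply:
                log.append((phase, node, a))    # the node's reply
    return log

assert len(orchestrated_messages(4)) == 6 * (4 - 1)
assert len(orchestrated_messages(4, unlock_needs_reply=False)) == 5 * (4 - 1)
```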
This results in 6*N - 6 messages (or 5*N - 5 if the last message does not require a reply). It is easy to see when 5*(N - 1) beats 2*N*(N - 1): whenever 2*N > 5, i.e. for any cluster of three or more nodes.
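Comparing the two formulas numerically (toy arithmetic, not a benchmark; both helper names are invented) shows the linear cost winning from N = 3 onward:

```python
# Current flow is quadratic in cluster size; the orchestrated flow is
# linear (assuming the final unlock needs no reply).

def current(n):      return 2 * n * (n - 1)
def orchestrated(n): return 5 * (n - 1)

for n in (2, 3, 4, 8, 16):
    print(n, current(n), orchestrated(n))

# Smallest cluster size where orchestration sends fewer messages.
crossover = next(n for n in range(2, 100) if orchestrated(n) < current(n))
assert crossover == 3
```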
Or is this too similar to transactions with multiple keys?
I think that with the current implementation, the putAll operation should be discouraged: it does not perform better than issuing multiple put operations, and in terms of atomicity it is probably not much better either.