Type: Enhancement
Resolution: Obsolete
Priority: Major
Essentially, this is about supporting eventual consistency in Infinispan. Currently Infinispan is strongly consistent when using synchronous distribution mode: each data owner receives updates synchronously, so anyone anywhere on the cluster doing a GET will see the correct value. The only exception is during a rehash (when a node joins or leaves), where consistency is temporarily eventual, since the GET may reach a new joiner that has not yet applied the state it receives from its neighbours. However, this is hidden from users: the GET is sent to more than one data owner, and if an UnsureResponse is received (because a new joiner that has not finished applying state responds), the caller thread waits for more definite responses.
However, there is also a use case for being eventually consistent, the main benefits being speed and partition tolerance. For example, if we use distribution in asynchronous mode, writes become much faster. On the other hand, anyone anywhere doing a GET will have to perform the GET on all data owners and compare the versions of the data received to determine which is the latest; and if there is a conflict, all values have to be passed back to the user.
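To make the read path concrete, here is a minimal sketch under the assumptions above (hypothetical types, not Infinispan API): each owner returns its value together with a partially ordered version, and the caller keeps only the values whose version is not strictly behind another response. One survivor means a consistent read; several survivors mean a conflict to hand back to the user.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical types for illustration; these are not Infinispan classes.
enum Relation { BEFORE, AFTER, EQUAL, CONCURRENT }

interface Version {
    Relation compare(Version other); // partial order, e.g. backed by a vector clock
}

final class VersionedValue<V> {
    final V value;
    final Version version;

    VersionedValue(V value, Version version) {
        this.value = value;
        this.version = version;
    }
}

final class ReadReconciler {

    /**
     * Given one response per data owner, drop every value whose version is strictly
     * behind another response (such a value can simply be "fast forwarded").
     * A single survivor is a consistent read; multiple survivors are a conflict.
     */
    static <V> List<VersionedValue<V>> reconcile(List<VersionedValue<V>> responses) {
        List<VersionedValue<V>> survivors = new ArrayList<>();
        for (VersionedValue<V> candidate : responses) {
            boolean dominated = false;
            for (VersionedValue<V> kept : survivors) {
                Relation r = candidate.version.compare(kept.version);
                if (r == Relation.BEFORE || r == Relation.EQUAL) {
                    dominated = true; // an equal or newer value is already kept
                    break;
                }
            }
            if (!dominated) {
                // Drop any previously kept value that this candidate supersedes.
                survivors.removeIf(kept ->
                        kept.version.compare(candidate.version) == Relation.BEFORE);
                survivors.add(candidate);
            }
        }
        return survivors;
    }
}
```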
So in terms of design, what I have in mind is:
- All cache entries are versioned using vector clocks. One vector clock per node.
- When a node performs a GET, the GET is sent to all data owners (concurrently), and the value + version is retrieved from each.
- If the versions are all the same (or they can be "fast forwarded"), the value is returned.
- Otherwise, all potential values and their versions are returned.
- A resolve() API should be provided, through which application code can give a "hint" as to which version is "correct"; this will cause an update.
- In terms of implementation, this will touch the DistributionInterceptor, InternalCacheEntry and relevant factories, some config code (since this consistency model should be configurable), and a new public interface (sketched after this list).
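A rough sketch of how these pieces might fit together (all names hypothetical, nothing here is committed API): a vector clock with one counter per node that provides the dominates/conflicts comparison (it could back the Version comparison in the earlier sketch), plus one possible shape for the public resolve() interface through which the application hints at the correct value.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a vector clock with one counter per node; the node address is
// represented as a String here instead of the real cluster address type.
final class VectorClock {

    private final Map<String, Long> counters = new HashMap<>();

    /** Bumped for the local node on every update that node applies. */
    void increment(String nodeAddress) {
        counters.merge(nodeAddress, 1L, Long::sum);
    }

    /** True if this clock is >= 'other' for every node, i.e. 'other' can be fast forwarded to this. */
    boolean dominates(VectorClock other) {
        for (Map.Entry<String, Long> entry : other.counters.entrySet()) {
            if (counters.getOrDefault(entry.getKey(), 0L) < entry.getValue()) {
                return false;
            }
        }
        return true;
    }

    /** Two clocks conflict when neither dominates the other. */
    boolean conflictsWith(VectorClock other) {
        return !dominates(other) && !other.dominates(this);
    }

    /** Pointwise maximum of two clocks, used when a resolved value is written back. */
    VectorClock merge(VectorClock other) {
        VectorClock merged = new VectorClock();
        merged.counters.putAll(this.counters);
        other.counters.forEach((node, count) -> merged.counters.merge(node, count, Math::max));
        return merged;
    }
}

/**
 * Hypothetical shape of the resolve() "hint" API: the application is handed the
 * conflicting values with their clocks and returns the one it considers correct;
 * the cache then writes that value back as an update.
 */
interface ConflictResolver<K, V> {
    V resolve(K key, Map<VectorClock, V> conflictingValues);
}
```

In this sketch, a resolved value would be written back under the merge of the conflicting clocks with the local node's counter incremented, so subsequent reads see it as dominating all of the conflicting ancestors.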