Type: Bug
Resolution: Done
Priority: Critical
Affects Version: 4.2.1.FINAL
Fix Version: None
I have a test case that puts 30,000 entries into a cache in dist mode, with the number of owners set to 2 and L1 disabled. When a new node joins the cluster, the total number of entries across all caches grows larger than 60,000. Even worse, after I execute cache.clear() on all caches in the cluster, many entries are still left.
The attached unit test can reproduce this issue.
Is duplicated by: ISPN-962 Entries not committed w/ DistLockingInterceptor and L1 caching disabled (Resolved)