Type: Bug
Resolution: Done
Priority: Critical
Affects Version: 4.1.0.BETA1
Fix Version: None
Severity: Medium
I'm evaluating some of Infinispan's clustering capabilities and may have found a bug introduced somewhere in 4.1.x (no problems occur when repeating the same procedure with 4.0.0.Final).
I'll attach a simple Maven test case that isolates the freezing conditions.
Basically, after starting the test in debug mode twice, with a breakpoint set after the first VM tries to get a cache that doesn't exist, the second VM freezes with the following stack (more details in the attached InfinispanTest class):
Thread [main] (Suspended)
Unsafe.park(boolean, long) line: not available [native method]
LockSupport.park(Object) line: 158
FutureTask$Sync(AbstractQueuedSynchronizer).parkAndCheckInterrupt() line: 747
FutureTask$Sync(AbstractQueuedSynchronizer).doAcquireSharedInterruptibly(int) line: 905
FutureTask$Sync(AbstractQueuedSynchronizer).acquireSharedInterruptibly(int) line: 1217
FutureTask$Sync.innerGet() line: 218
FutureTask<V>.get() line: 83
DistributionManagerImpl.waitForJoinToComplete() line: 144
NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
Method.invoke(Object, Object...) line: 597
ReflectionUtil.invokeAccessibly(Object, Method, Object[]) line: 170
AbstractComponentRegistry$PrioritizedMethod.invoke() line: 852
ComponentRegistry(AbstractComponentRegistry).internalStart() line: 672
ComponentRegistry(AbstractComponentRegistry).start() line: 574
ComponentRegistry.start() line: 148
CacheDelegate<K,V>.start() line: 291
DefaultCacheManager.createCache(String) line: 446
DefaultCacheManager.getCache(String) line: 409
InfinispanTest.test() line: 61
NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
Method.invoke(Object, Object...) line: 597
FrameworkMethod$1.runReflectiveCall() line: 44
FrameworkMethod$1(ReflectiveCallable).run() line: 15
FrameworkMethod.invokeExplosively(Object, Object...) line: 41
InvokeMethod.evaluate() line: 20
BlockJUnit4ClassRunner.runChild(FrameworkMethod, RunNotifier) line: 76
BlockJUnit4ClassRunner.runChild(Object, RunNotifier) line: 50
ParentRunner$3.run() line: 193
ParentRunner$1.schedule(Runnable) line: 52
BlockJUnit4ClassRunner(ParentRunner<T>).runChildren(RunNotifier) line: 191
ParentRunner<T>.access$000(ParentRunner, RunNotifier) line: 42
ParentRunner$2.evaluate() line: 184
RunBefores.evaluate() line: 28
BlockJUnit4ClassRunner(ParentRunner<T>).run(RunNotifier) line: 236
JUnit4TestClassReference(JUnit4TestReference).run(TestExecution) line: 46
TestExecution.run(ITestReference[]) line: 38
RemoteTestRunner.runTests(String[], String, TestExecution) line: 467
RemoteTestRunner.runTests(TestExecution) line: 683
RemoteTestRunner.run() line: 390
RemoteTestRunner.main(String[]) line: 197
There is probably a simpler, more elegant way to test this condition (one that could be used in an official test case), but this was the fastest way for me to reproduce the bug without digging too deep into the Infinispan internals.
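The stack shows the main thread parked inside FutureTask.get(), waiting on a join future that never completes. As a self-contained illustration of that mechanism (plain java.util.concurrent only, not Infinispan code; the class name and timeout are mine), a FutureTask whose task never finishes blocks its caller in exactly this way — DistributionManagerImpl.waitForJoinToComplete() calls get() with no timeout, so in the bug it parks forever:

```java
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class JoinHangDemo {
    public static void main(String[] args) throws Exception {
        // A FutureTask whose task never finishes, standing in for the
        // rehash-join future that waitForJoinToComplete() blocks on.
        FutureTask<Void> joinFuture = new FutureTask<>(() -> {
            Thread.sleep(Long.MAX_VALUE); // the join never completes
            return null;
        });
        Thread joiner = new Thread(joinFuture, "rehash-join");
        joiner.setDaemon(true); // let the JVM exit when main returns
        joiner.start();

        boolean blocked = false;
        try {
            // The real code uses get() with no timeout and parks forever;
            // bounding the wait here is only so this demo terminates.
            joinFuture.get(500, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            blocked = true;
        }
        System.out.println("caller blocked on join future: " + blocked);
    }
}
```

Running it prints `caller blocked on join future: true`, matching the parked `Unsafe.park`/`LockSupport.park` frames at the top of the trace.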
Is related to: ISPN-434 Creating a new distributed cache on a non-coordinator node causes rehashing to hang (Resolved)