[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.infinispan:infinispan-spring-boot-it:jar:9.0.0-SNAPSHOT
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: ${project.groupId}:infinispan-server-hotrod:jar -> version (?) vs ${project.version} @ org.infinispan:infinispan-spring-boot-it:[unknown-version], /home/rvansa/workspace/ispn/infinispan/integrationtests/spring-boot-it/pom.xml, line 94, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] Inspecting build with total of 1 modules...
[INFO] Installing Nexus Staging features:
[INFO]   ... total of 1 executions of maven-deploy-plugin replaced with nexus-staging-maven-plugin
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building Infinispan Core 9.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-cli) @ infinispan-core ---
[INFO] Surefire report directory: /home/rvansa/workspace/ispn/infinispan/core/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.infinispan.tx.InfinispanNodeFailureTest
Configuring TestNG with: TestNG652Configurator
14:17:29,492 DEBUG [org.jboss.logging] (main) Logging Provider: org.jboss.logging.Log4j2LoggerProvider
14:17:29,634 DEBUG [org.infinispan.commons.test.TestNGTestListener] (TestNG) Before setup testClassStarted
14:17:29,645 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) After setup testClassStarted
14:17:29,645 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) Before setup createBeforeClass
14:17:29,646 DEBUG [org.infinispan.tx.InfinispanNodeFailureTest] (testng-InfinispanNodeFailureTest) Creating cache managers
Transaction manager used: JBossTM
Transport protocol stack used = tcp
14:17:29,756 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) No service impls found: ModuleLifecycle
14:17:29,758 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) Loading service impl: CoreTestMetadataFileFinder
14:17:29,798 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) Loading service impl: TestModuleCommandExtensions
14:17:29,799 DEBUG [org.infinispan.util.ModuleProperties] (testng-InfinispanNodeFailureTest) Loading module command extension SPI class: org.infinispan.remoting.rpc.TestModuleCommandExtensions@52fb8945
14:17:29,824 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String cancelPushState
14:17:29,824 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String takeSiteOffline
14:17:29,824 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String pushState
14:17:29,824 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String bringSiteOnline
14:17:29,824 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinatorAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheManagerStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute runningCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute globalConfigurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute name [r=true,w=false,is=false,type=java.lang.String]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheNames [r=true,w=false,is=false,type=java.lang.String]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterMembers [r=true,w=false,is=false,type=java.lang.String]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinator [r=true,w=false,is=true,type=boolean]
14:17:29,825 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute nodeAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheConfigurationNames [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterName [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute createdCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterSize [r=true,w=false,is=false,type=int]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute physicalAddresses [r=true,w=false,is=false,type=java.lang.String]
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:29,826 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterAvailability [r=true,w=false,is=false,type=java.lang.String]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:29,827 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:29,828 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:29,828 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@19740d3b under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=GlobalXSiteAdminOperations
14:17:29,829 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@136c1de3 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=CacheManager
14:17:29,829 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@615cac36 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=LocalTopologyManager
14:17:29,829 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@3674436a under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=CacheContainerStats
14:17:29,920 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000078: Starting JGroups channel ISPN
14:17:30,049 DEBUG [org.jgroups.stack.Configurator] (testng-InfinispanNodeFailureTest) set property TCP_NIO2.diagnostics_addr to default value /224.0.75.75
14:17:30,055 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) null: set max_xmit_req_size from 0 to 247600
14:17:30,055 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) null: set max_xmit_req_size from 0 to 247600
14:17:30,062 TRACE [org.jgroups.blocks.MessageDispatcher] (testng-InfinispanNodeFailureTest) setting local_addr (null) to InfinispanNodeFailureTest-NodeA-7443
14:17:30,063 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: stable task started
14:17:30,064 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Discovery.stopped=false
14:17:30,071 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) address=InfinispanNodeFailureTest-NodeA-7443, cluster=ISPN, physical address=127.0.0.1:7900
14:17:30,071 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Added discovery for InfinispanNodeFailureTest-NodeA-7443. Registered discoveries: {InfinispanNodeFailureTest-NodeA-7443=TEST_PING@InfinispanNodeFailureTest-NodeA-7443}
14:17:30,072 DEBUG [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) No other nodes yet, marking this node as coord
14:17:30,072 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: no members discovered after 1 ms: creating cluster as first member
14:17:30,073 DEBUG [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) [InfinispanNodeFailureTest-NodeA-7443 setDigest()] existing digest: [] new digest: InfinispanNodeFailureTest-NodeA-7443: [0 (0)] resulting digest: InfinispanNodeFailureTest-NodeA-7443: [0 (0)]
14:17:30,073 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: installing view [InfinispanNodeFailureTest-NodeA-7443|0] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,073 DEBUG [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) resuming message garbage collection
14:17:30,074 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1]
14:17:30,074 TRACE [org.jgroups.protocols.tom.TOA] (testng-InfinispanNodeFailureTest) Handle view [InfinispanNodeFailureTest-NodeA-7443|0] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,074 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) new membership: [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,077 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (testng-InfinispanNodeFailureTest) Added a new task directly: 0 task(s) are waiting
14:17:30,078 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|0] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,079 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|0] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,083 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1]
14:17:30,083 DEBUG [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) resuming message garbage collection
14:17:30,083 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: created cluster (first member). My view is [InfinispanNodeFailureTest-NodeA-7443|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
14:17:30,083 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000079: Channel ISPN local address is InfinispanNodeFailureTest-NodeA-7443, physical addresses are [127.0.0.1:7900]
14:17:30,086 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Starting LocalTopologyManager on InfinispanNodeFailureTest-NodeA-7443
14:17:30,087 INFO [org.infinispan.factories.GlobalComponentRegistry] (testng-InfinispanNodeFailureTest) ISPN000128: Infinispan version: Infinispan 'Ruppaner' 9.0.0-SNAPSHOT
14:17:30,087 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Received new cluster view: 0, isCoordinator = true, old status = INITIALIZING
14:17:30,087 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Recovering cluster status for view 0
14:17:30,089 DEBUG [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Started cache manager ISPN on InfinispanNodeFailureTest-NodeA
14:17:30,090 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) dests=null, command=CacheTopologyControlCommand{cache=null, type=GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0}, mode=SYNCHRONOUS, timeout=24000
14:17:30,090 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Attempting to execute command on self: CacheTopologyControlCommand{cache=null, type=GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0}
14:17:30,091 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Waiting on view 0 being accepted
14:17:30,091 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) No service impls found: ModuleLifecycle
14:17:30,091 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Sending cluster status response for view 0
14:17:30,092 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) Loading service impl: CoreTestMetadataFileFinder
14:17:30,092 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Got 1 status responses. members are [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,098 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Updating cluster members for all the caches. New list is [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,098 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) dests=null, command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=240000
14:17:30,100 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) Loading service impl: TestModuleCommandExtensions
14:17:30,100 DEBUG [org.infinispan.util.ModuleProperties] (testng-InfinispanNodeFailureTest) Loading module command extension SPI class: org.infinispan.remoting.rpc.TestModuleCommandExtensions@4c3dbc64
14:17:30,102 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String pushState
14:17:30,102 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String bringSiteOnline
14:17:30,102 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String cancelPushState
14:17:30,102 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String takeSiteOffline
14:17:30,102 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterAvailability [r=true,w=false,is=false,type=java.lang.String]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterName [r=true,w=false,is=false,type=java.lang.String]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinatorAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterMembers [r=true,w=false,is=false,type=java.lang.String]
14:17:30,103 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute name [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute createdCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute globalConfigurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheManagerStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterSize [r=true,w=false,is=false,type=int]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinator [r=true,w=false,is=true,type=boolean]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute runningCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute nodeAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute physicalAddresses [r=true,w=false,is=false,type=java.lang.String]
14:17:30,104 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheConfigurationNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,105 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,106 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,106 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,106 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,106 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,106 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,106 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@1156dd7e under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=GlobalXSiteAdminOperations
14:17:30,106 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@4e4b4408 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=LocalTopologyManager
14:17:30,106 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@58accf0d under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=CacheManager
14:17:30,106 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@6cbecc9b under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=CacheContainerStats
14:17:30,108 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000078: Starting JGroups channel ISPN
14:17:30,114 DEBUG [org.jgroups.stack.Configurator] (testng-InfinispanNodeFailureTest) set property TCP_NIO2.diagnostics_addr to default value /224.0.75.75
14:17:30,115 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) null: set max_xmit_req_size from 0 to 247600
14:17:30,115 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) null: set max_xmit_req_size from 0 to 247600
14:17:30,116 TRACE [org.jgroups.blocks.MessageDispatcher] (testng-InfinispanNodeFailureTest) setting local_addr (null) to InfinispanNodeFailureTest-NodeB-62629
14:17:30,116 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: stable task started
14:17:30,116 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Discovery.stopped=false
14:17:30,117 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) address=InfinispanNodeFailureTest-NodeB-62629, cluster=ISPN, physical address=127.0.0.1:7901
14:17:30,118 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Added discovery for InfinispanNodeFailureTest-NodeB-62629. Registered discoveries: {InfinispanNodeFailureTest-NodeA-7443=TEST_PING@InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629=TEST_PING@InfinispanNodeFailureTest-NodeB-62629}
14:17:30,118 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Map InfinispanNodeFailureTest-NodeA-7443 with physical address 127.0.0.1:7900 in TEST_PING@InfinispanNodeFailureTest-NodeB-62629
14:17:30,118 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Map InfinispanNodeFailureTest-NodeB-62629 with physical address 127.0.0.1:7901 in TEST_PING@InfinispanNodeFailureTest-NodeA-7443
14:17:30,118 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Returning ping rsp: InfinispanNodeFailureTest-NodeA-7443, name=InfinispanNodeFailureTest-NodeA-7443, addr=127.0.0.1:7900, coord
14:17:30,119 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Skipping sending discovery to self
14:17:30,119 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: discovery took 1 ms, members: 1 rsps (1 coords) [done]
14:17:30,119 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending JOIN(InfinispanNodeFailureTest-NodeB-62629) to InfinispanNodeFailureTest-NodeA-7443
14:17:30,120 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: created sender window for InfinispanNodeFailureTest-NodeA-7443 (conn-id=0)
14:17:30,121 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #1, conn_id=0, first)
14:17:30,121 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are GMS: GmsHeader[JOIN_REQ]: mbr=InfinispanNodeFailureTest-NodeB-62629, UNICAST3: DATA, seqno=1, first, TP: [cluster_name=ISPN]
14:17:30,122 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (83 bytes (415.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,124 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (86 bytes)
14:17:30,129 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) 127.0.0.1:7901: connecting to 127.0.0.1:7900
14:17:30,134 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[JOIN_REQ]: mbr=InfinispanNodeFailureTest-NodeB-62629, UNICAST3: DATA, seqno=1, first, TP: [cluster_name=ISPN]
14:17:30,134 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #1, conn_id=0, first)
14:17:30,135 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: created receiver window for InfinispanNodeFailureTest-NodeB-62629 at seqno=#1 for conn-id=0
14:17:30,135 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#1
14:17:30,136 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: joiners=[InfinispanNodeFailureTest-NodeB-62629], suspected=[], leaving=[], new view: [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,136 DEBUG [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) suspending message garbage collection
14:17:30,137 DEBUG [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: resume task started, max_suspend_time=33000
14:17:30,137 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: mcasting view [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] (2 mbrs)
14:17:30,137 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#1
14:17:30,138 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=1], TP: [cluster_name=ISPN]
14:17:30,138 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=57 bytes]
14:17:30,138 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=57 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=1], TP: [cluster_name=ISPN]
14:17:30,138 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (108 bytes (540.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,138 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (111 bytes)
14:17:30,138 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received InfinispanNodeFailureTest-NodeA-7443#1
14:17:30,139 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=57 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=1], TP: [cluster_name=ISPN]
14:17:30,139 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: message InfinispanNodeFailureTest-NodeA-7443::1 was added to queue (not yet server)
14:17:30,139 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#1-1 (1 messages)
14:17:30,140 TRACE [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received full view: [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,140 DEBUG [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: installing view [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,140 TRACE [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1]
14:17:30,141 TRACE [org.jgroups.protocols.tom.TOA] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Handle view [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,141 TRACE [org.jgroups.protocols.MFC] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,141 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,141 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,141 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Joined: [InfinispanNodeFailureTest-NodeB-62629], Left: []
14:17:30,141 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,144 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Received new cluster view: 1, isCoordinator = true, old status = COORDINATOR
14:17:30,144 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Updating cluster members for all the caches. New list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,144 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) dests=null, command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=240000
14:17:30,146 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Replication task sending CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1} to single recipient InfinispanNodeFailureTest-NodeB-62629 with response mode GET_ALL
14:17:30,148 INFO [org.infinispan.CLUSTER] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN100000: Node InfinispanNodeFailureTest-NodeB-62629 joined the cluster
14:17:30,149 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: created sender window for InfinispanNodeFailureTest-NodeA-7443 (conn-id=0)
14:17:30,149 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #1, conn_id=0, first)
14:17:30,150 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=1, first, TP: [cluster_name=ISPN]
14:17:30,150 TRACE
[org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL] 14:17:30,150 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=1, first, TP: [cluster_name=ISPN] 14:17:30,150 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #1, conn_id=0, first) 14:17:30,151 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#1 14:17:30,152 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: got all ACKs (1) from members for view [InfinispanNodeFailureTest-NodeA-7443|1] 14:17:30,152 TRACE [org.jgroups.protocols.UNICAST3] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: created sender window for InfinispanNodeFailureTest-NodeB-62629 (conn-id=1) 14:17:30,152 TRACE [org.jgroups.protocols.UNICAST3] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #1, conn_id=1, first) 14:17:30,152 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[JOIN_RSP], UNICAST3: DATA, seqno=1, conn_id=1, 
first, TP: [cluster_name=ISPN] 14:17:30,153 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (132 bytes (660.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,153 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (135 bytes) 14:17:30,154 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[JOIN_RSP], UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN] 14:17:30,154 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #1, conn_id=1, first) 14:17:30,154 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: created receiver window for InfinispanNodeFailureTest-NodeA-7443 at seqno=#1 for conn-id=1 14:17:30,154 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#1 14:17:30,155 DEBUG [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) [InfinispanNodeFailureTest-NodeB-62629 setDigest()] existing digest: [] new digest: InfinispanNodeFailureTest-NodeA-7443: [0 (0)], InfinispanNodeFailureTest-NodeB-62629: [0 (0)] resulting digest: InfinispanNodeFailureTest-NodeA-7443: [0 (0)], InfinispanNodeFailureTest-NodeB-62629: [0 (0)] 14:17:30,155 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: installing view 
[InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,155 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1] 14:17:30,155 TRACE [org.jgroups.protocols.tom.TOA] (testng-InfinispanNodeFailureTest) Handle view [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,155 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,156 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (testng-InfinispanNodeFailureTest) Added a new task directly: 0 task(s) are waiting 14:17:30,156 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,156 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|1] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,156 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: flushing become_server_queue (1 elements) 14:17:30,157 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #2, conn_id=0) 14:17:30,157 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received 
InfinispanNodeFailureTest-NodeA-7443#1 14:17:30,157 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=2, TP: [cluster_name=ISPN] 14:17:30,157 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#1-1 (1 messages) 14:17:30,157 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000079: Channel ISPN local address is InfinispanNodeFailureTest-NodeB-62629, physical addresses are [127.0.0.1:7901] 14:17:30,157 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (67 bytes (335.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,157 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (70 bytes) 14:17:30,158 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=24000 14:17:30,158 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Received new cluster view: 1, isCoordinator = false, old status = INITIALIZING 14:17:30,158 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) 
Replication task sending CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,158 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=2, TP: [cluster_name=ISPN] 14:17:30,158 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #2, conn_id=0) 14:17:30,158 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#2 14:17:30,158 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: got all ACKs (1) from joiners for view [InfinispanNodeFailureTest-NodeA-7443|1] 14:17:30,158 TRACE [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1] 14:17:30,158 DEBUG [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) resuming message garbage collection 14:17:30,167 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: invoking unicast RPC [req-id=1] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,167 TRACE [org.jgroups.blocks.RequestCorrelator] 
(transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) InfinispanNodeFailureTest-NodeA-7443: invoking unicast RPC [req-id=2] on InfinispanNodeFailureTest-NodeB-62629 14:17:30,167 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #3, conn_id=0) 14:17:30,167 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #2, conn_id=1) 14:17:30,167 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=2, rsp_expected=true, UNICAST3: DATA, seqno=2, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,167 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=1, rsp_expected=true, UNICAST3: DATA, seqno=3, TP: [cluster_name=ISPN] 14:17:30,168 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,168 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,168 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (83 bytes) 14:17:30,168 TRACE 
[org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (83 bytes) 14:17:30,169 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=2, rsp_expected=true, UNICAST3: DATA, seqno=2, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,169 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #2, conn_id=1) 14:17:30,169 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#2 14:17:30,169 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 2 14:17:30,170 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=1, rsp_expected=true, UNICAST3: DATA, seqno=3, TP: [cluster_name=ISPN] 14:17:30,170 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #3, conn_id=0) 14:17:30,170 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#3 14:17:30,170 TRACE 
[org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 1 14:17:30,180 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,180 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeB-62629] 14:17:30,181 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,181 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,182 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t3) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} 14:17:30,182 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t3) 
sending rsp for 1 to InfinispanNodeFailureTest-NodeB-62629 14:17:30,182 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t3) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #3, conn_id=1) 14:17:30,182 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t3) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=1, rsp_expected=true, UNICAST3: DATA, seqno=3, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,182 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,183 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (80 bytes) 14:17:30,183 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=1, rsp_expected=true, UNICAST3: DATA, seqno=3, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,183 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #3, conn_id=1) 14:17:30,183 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,184 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
(OOB-1,InfinispanNodeFailureTest-NodeB-62629) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=true} , received=true, suspected=false 14:17:30,184 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Starting LocalTopologyManager on InfinispanNodeFailureTest-NodeB-62629 14:17:30,184 DEBUG [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Started cache manager ISPN on InfinispanNodeFailureTest-NodeB 14:17:30,187 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) No service impls found: ModuleLifecycle 14:17:30,192 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t2) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} 14:17:30,192 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) Loading service impl: CoreTestMetadataFileFinder 14:17:30,192 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t2) sending rsp for 2 to InfinispanNodeFailureTest-NodeA-7443 14:17:30,192 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t2) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #4, conn_id=0) 14:17:30,192 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t2) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=2, rsp_expected=true, UNICAST3: DATA, seqno=4, TP: [cluster_name=ISPN] 
14:17:30,193 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,193 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (80 bytes) 14:17:30,193 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=2, rsp_expected=true, UNICAST3: DATA, seqno=4, TP: [cluster_name=ISPN] 14:17:30,193 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #4, conn_id=0) 14:17:30,193 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#4 14:17:30,193 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Responses: sender=InfinispanNodeFailureTest-NodeB-62629value=SuccessfulResponse{responseValue=true} , received=true, suspected=false 14:17:30,199 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) Loading service impl: TestModuleCommandExtensions 14:17:30,199 DEBUG [org.infinispan.util.ModuleProperties] (testng-InfinispanNodeFailureTest) Loading module command extension SPI class: org.infinispan.remoting.rpc.TestModuleCommandExtensions@5335158a 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterAvailability 
[r=true,w=false,is=false,type=java.lang.String] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long] 
14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 14:17:30,201 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute physicalAddresses [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheManagerStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterSize [r=true,w=false,is=false,type=int] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute runningCacheCount [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute globalConfigurationAsProperties [r=true,w=false,is=false,type=java.util.Properties] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinator [r=true,w=false,is=true,type=boolean] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheNames [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute name [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute createdCacheCount [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheCount [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] 
(testng-InfinispanNodeFailureTest) Attribute nodeAddress [r=true,w=false,is=false,type=java.lang.String] 14:17:30,202 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterName [r=true,w=false,is=false,type=java.lang.String] 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterMembers [r=true,w=false,is=false,type=java.lang.String] 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheConfigurationNames [r=true,w=false,is=false,type=java.lang.String] 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinatorAddress [r=true,w=false,is=false,type=java.lang.String] 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String] 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String pushState 14:17:30,203 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String bringSiteOnline 14:17:30,204 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String cancelPushState 14:17:30,204 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String takeSiteOffline 14:17:30,204 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@5df05465 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=LocalTopologyManager 14:17:30,204 TRACE 
[org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@617e7af0 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=CacheContainerStats 14:17:30,204 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@13a949d9 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=CacheManager 14:17:30,205 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@6d8ac012 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=GlobalXSiteAdminOperations 14:17:30,207 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000078: Starting JGroups channel ISPN 14:17:30,213 DEBUG [org.jgroups.stack.Configurator] (testng-InfinispanNodeFailureTest) set property TCP_NIO2.diagnostics_addr to default value /224.0.75.75 14:17:30,214 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) null: set max_xmit_req_size from 0 to 247600 14:17:30,215 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) null: set max_xmit_req_size from 0 to 247600 14:17:30,215 TRACE [org.jgroups.blocks.MessageDispatcher] (testng-InfinispanNodeFailureTest) setting local_addr (null) to InfinispanNodeFailureTest-NodeC-7981 14:17:30,215 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: stable task started 14:17:30,215 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Discovery.stopped=false 14:17:30,216 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) address=InfinispanNodeFailureTest-NodeC-7981, cluster=ISPN, physical address=127.0.0.1:7902 14:17:30,216 TRACE 
[org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Added discovery for InfinispanNodeFailureTest-NodeC-7981. Registered discoveries: {InfinispanNodeFailureTest-NodeA-7443=TEST_PING@InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629=TEST_PING@InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981=TEST_PING@InfinispanNodeFailureTest-NodeC-7981} 14:17:30,216 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Map InfinispanNodeFailureTest-NodeA-7443 with physical address 127.0.0.1:7900 in TEST_PING@InfinispanNodeFailureTest-NodeC-7981 14:17:30,217 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Map InfinispanNodeFailureTest-NodeC-7981 with physical address 127.0.0.1:7902 in TEST_PING@InfinispanNodeFailureTest-NodeA-7443 14:17:30,217 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Returning ping rsp: InfinispanNodeFailureTest-NodeA-7443, name=InfinispanNodeFailureTest-NodeA-7443, addr=127.0.0.1:7900, coord 14:17:30,217 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Map InfinispanNodeFailureTest-NodeB-62629 with physical address 127.0.0.1:7901 in TEST_PING@InfinispanNodeFailureTest-NodeC-7981 14:17:30,217 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Map InfinispanNodeFailureTest-NodeC-7981 with physical address 127.0.0.1:7902 in TEST_PING@InfinispanNodeFailureTest-NodeB-62629 14:17:30,217 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Returning ping rsp: InfinispanNodeFailureTest-NodeB-62629, name=InfinispanNodeFailureTest-NodeB-62629, addr=127.0.0.1:7901, server 14:17:30,217 TRACE [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Skipping sending discovery to self 14:17:30,217 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: discovery took 1 ms, 
members: 2 rsps (1 coords) [done] 14:17:30,217 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending JOIN(InfinispanNodeFailureTest-NodeC-7981) to InfinispanNodeFailureTest-NodeA-7443 14:17:30,217 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: created sender window for InfinispanNodeFailureTest-NodeA-7443 (conn-id=0) 14:17:30,217 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #1, conn_id=0, first) 14:17:30,217 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are GMS: GmsHeader[JOIN_REQ]: mbr=InfinispanNodeFailureTest-NodeC-7981, UNICAST3: DATA, seqno=1, first, TP: [cluster_name=ISPN] 14:17:30,218 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (83 bytes (415.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,218 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (86 bytes) 14:17:30,225 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) 127.0.0.1:7902: connecting to 127.0.0.1:7900 14:17:30,226 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[JOIN_REQ]: mbr=InfinispanNodeFailureTest-NodeC-7981, UNICAST3: DATA, seqno=1, first, TP: [cluster_name=ISPN] 14:17:30,226 TRACE 
[org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #1, conn_id=0, first) 14:17:30,226 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: created receiver window for InfinispanNodeFailureTest-NodeC-7981 at seqno=#1 for conn-id=0 14:17:30,226 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#1 14:17:30,226 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: joiners=[InfinispanNodeFailureTest-NodeC-7981], suspected=[], leaving=[], new view: [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,226 DEBUG [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) suspending message garbage collection 14:17:30,226 DEBUG [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: resume task started, max_suspend_time=33000 14:17:30,227 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: mcasting view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] (3 mbrs) 14:17:30,227 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#2 14:17:30,228 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to 
null, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=2], TP: [cluster_name=ISPN] 14:17:30,228 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes] 14:17:30,228 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=2], TP: [cluster_name=ISPN] 14:17:30,228 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received InfinispanNodeFailureTest-NodeA-7443#2 14:17:30,228 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (112 bytes (560.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,228 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#2-2 (1 messages) 14:17:30,228 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (115 bytes) 14:17:30,228 TRACE [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received delta view [InfinispanNodeFailureTest-NodeA-7443|2], ref-view=[InfinispanNodeFailureTest-NodeA-7443|1], joined=[InfinispanNodeFailureTest-NodeC-7981] 14:17:30,228 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (115 bytes) 14:17:30,228 DEBUG [org.jgroups.protocols.pbcast.GMS] 
(Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: installing view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,229 TRACE [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1], InfinispanNodeFailureTest-NodeC-7981: [-1] 14:17:30,229 TRACE [org.jgroups.protocols.tom.TOA] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Handle view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,229 TRACE [org.jgroups.protocols.MFC] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,229 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,229 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,229 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Joined: [InfinispanNodeFailureTest-NodeC-7981], Left: [] 14:17:30,229 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|2] (3) 
[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,230 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=2], TP: [cluster_name=ISPN] 14:17:30,230 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#2 14:17:30,230 INFO [org.infinispan.CLUSTER] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN100000: Node InfinispanNodeFailureTest-NodeC-7981 joined the cluster 14:17:30,230 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#2-2 (1 messages) 14:17:30,230 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=2], TP: [cluster_name=ISPN] 14:17:30,230 TRACE [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received delta view [InfinispanNodeFailureTest-NodeA-7443|2], ref-view=[InfinispanNodeFailureTest-NodeA-7443|1], joined=[InfinispanNodeFailureTest-NodeC-7981] 14:17:30,230 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: message InfinispanNodeFailureTest-NodeA-7443::2 was added to queue (not yet server) 14:17:30,230 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Received new cluster 
view: 2, isCoordinator = true, old status = COORDINATOR 14:17:30,230 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Updating cluster members for all the caches. New list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,230 DEBUG [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: installing view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,230 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) dests=null, command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=240000 14:17:30,230 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Replication task sending CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1} to addresses null with response mode GET_ALL 14:17:30,231 TRACE [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1], InfinispanNodeFailureTest-NodeC-7981: [-1] 14:17:30,231 TRACE [org.jgroups.protocols.tom.TOA] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Handle 
view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,231 TRACE [org.jgroups.protocols.MFC] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,231 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #2, conn_id=0) 14:17:30,231 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=2, TP: [cluster_name=ISPN] 14:17:30,231 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL] 14:17:30,232 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=2, TP: [cluster_name=ISPN] 14:17:30,232 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,232 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #2, conn_id=0) 14:17:30,232 DEBUG 
[org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,232 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#2 14:17:30,232 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Joined: [InfinispanNodeFailureTest-NodeC-7981], Left: [] 14:17:30,232 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,232 TRACE [org.jgroups.blocks.RequestCorrelator] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) InfinispanNodeFailureTest-NodeA-7443: invoking multicast RPC [req-id=3] 14:17:30,232 INFO [org.infinispan.CLUSTER] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) ISPN100000: Node InfinispanNodeFailureTest-NodeC-7981 joined the cluster 14:17:30,232 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Received new cluster view: 2, isCoordinator = false, old status = REGULAR_MEMBER 14:17:30,232 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #5, conn_id=0) 14:17:30,232 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, 
src=InfinispanNodeFailureTest-NodeB-62629, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=5, TP: [cluster_name=ISPN] 14:17:30,233 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=3, rsp_expected=true, NAKACK2: [MSG, seqno=3], TP: [cluster_name=ISPN] 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (67 bytes (335.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (70 bytes) 14:17:30,233 TRACE [org.jgroups.protocols.MFC] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) InfinispanNodeFailureTest-NodeA-7443 used 8 credits, 1999992 remaining 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (63 bytes) 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (63 bytes) 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 
(3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=5, TP: [cluster_name=ISPN] 14:17:30,233 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #5, conn_id=0) 14:17:30,233 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=3, rsp_expected=true, NAKACK2: [MSG, seqno=3], TP: [cluster_name=ISPN] 14:17:30,233 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#5 14:17:30,233 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,233 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,233 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: got all ACKs (2) from members for view [InfinispanNodeFailureTest-NodeA-7443|2] 14:17:30,234 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 3 14:17:30,234 TRACE [org.jgroups.protocols.UNICAST3] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: created sender window for InfinispanNodeFailureTest-NodeC-7981 (conn-id=2) 14:17:30,234 TRACE [org.jgroups.protocols.UNICAST3] 
(ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #1, conn_id=2, first) 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=3, rsp_expected=true, NAKACK2: [MSG, seqno=3], TP: [cluster_name=ISPN] 14:17:30,234 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[JOIN_RSP], UNICAST3: DATA, seqno=1, conn_id=2, first, TP: [cluster_name=ISPN] 14:17:30,234 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: message InfinispanNodeFailureTest-NodeA-7443::3 was added to queue (not yet server) 14:17:30,234 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,234 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 8 credits, 1999992 remaining 14:17:30,234 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] 
(remote-thread-InfinispanNodeFailureTest-NodeB-p6-t4) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (152 bytes (760.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981] 14:17:30,234 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t4) sending rsp for 3 to InfinispanNodeFailureTest-NodeA-7443 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (155 bytes) 14:17:30,234 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t4) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #6, conn_id=0) 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t4) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=3, rsp_expected=true, UNICAST3: DATA, seqno=6, TP: [cluster_name=ISPN] 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,234 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (80 bytes) 14:17:30,235 TRACE [org.jgroups.protocols.TCP_NIO2] 
(OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=3, rsp_expected=true, UNICAST3: DATA, seqno=6, TP: [cluster_name=ISPN] 14:17:30,235 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #6, conn_id=0) 14:17:30,235 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#6 14:17:30,235 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=81 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[JOIN_RSP], UNICAST3: DATA, seqno=1, conn_id=2, first, TP: [cluster_name=ISPN] 14:17:30,236 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #1, conn_id=2, first) 14:17:30,236 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: created receiver window for InfinispanNodeFailureTest-NodeA-7443 at seqno=#1 for conn-id=2 14:17:30,236 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#1 14:17:30,236 DEBUG [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) [InfinispanNodeFailureTest-NodeC-7981 setDigest()] existing digest: [] new digest: InfinispanNodeFailureTest-NodeA-7443: [1 (1)], 
InfinispanNodeFailureTest-NodeB-62629: [0 (0)], InfinispanNodeFailureTest-NodeC-7981: [0 (0)] resulting digest: InfinispanNodeFailureTest-NodeA-7443: [1 (1)], InfinispanNodeFailureTest-NodeB-62629: [0 (0)], InfinispanNodeFailureTest-NodeC-7981: [0 (0)] 14:17:30,236 DEBUG [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: installing view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,236 TRACE [org.jgroups.protocols.pbcast.STABLE] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1], InfinispanNodeFailureTest-NodeC-7981: [-1] 14:17:30,236 TRACE [org.jgroups.protocols.tom.TOA] (testng-InfinispanNodeFailureTest) Handle view [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,236 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,237 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (testng-InfinispanNodeFailureTest) Added a new task directly: 0 task(s) are waiting 14:17:30,237 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|2] (3) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,237 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|2] (3) 
[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,237 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: flushing become_server_queue (2 elements) 14:17:30,237 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #2, conn_id=0) 14:17:30,237 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#2 14:17:30,237 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,237 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=2, TP: [cluster_name=ISPN] 14:17:30,237 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,237 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#2-2 (1 messages) 14:17:30,237 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000079: Channel ISPN local address is InfinispanNodeFailureTest-NodeC-7981, physical addresses are [127.0.0.1:7902] 14:17:30,237 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 3 
14:17:30,237 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (67 bytes (335.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,237 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,238 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (70 bytes)
14:17:30,238 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,238 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 8 credits, 1999992 remaining
14:17:30,238 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=24000
14:17:30,238 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL
14:17:30,238 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=2, TP: [cluster_name=ISPN]
14:17:30,238 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: invoking unicast RPC [req-id=4] on InfinispanNodeFailureTest-NodeA-7443
14:17:30,238 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #2, conn_id=0)
14:17:30,238 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #3, conn_id=0)
14:17:30,238 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#2
14:17:30,238 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=4, rsp_expected=true, UNICAST3: DATA, seqno=3, TP: [cluster_name=ISPN]
14:17:30,238 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: got all ACKs (1) from joiners for view [InfinispanNodeFailureTest-NodeA-7443|2]
14:17:30,238 TRACE [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1], InfinispanNodeFailureTest-NodeC-7981: [-1]
14:17:30,238 DEBUG [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) resuming message garbage collection
14:17:30,238 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,238 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t1) Received new cluster view: 2, isCoordinator = false, old status = INITIALIZING
14:17:30,238 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (83 bytes)
14:17:30,239 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=4, rsp_expected=true, UNICAST3: DATA, seqno=3, TP: [cluster_name=ISPN]
14:17:30,239 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #3, conn_id=0)
14:17:30,239 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#3
14:17:30,239 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 4
14:17:30,239 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeC-7981]
14:17:30,239 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,239 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0}
14:17:30,239 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) sending rsp for 4 to InfinispanNodeFailureTest-NodeC-7981
14:17:30,240 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #2, conn_id=2)
14:17:30,240 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=4, rsp_expected=true, UNICAST3: DATA, seqno=2, conn_id=2, TP: [cluster_name=ISPN]
14:17:30,240 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981]
14:17:30,240 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (80 bytes)
14:17:30,240 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=4, rsp_expected=true, UNICAST3: DATA, seqno=2, conn_id=2, TP: [cluster_name=ISPN]
14:17:30,240 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #2, conn_id=2)
14:17:30,240 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#2
14:17:30,240 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=true} , received=true, suspected=false
14:17:30,240 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Starting LocalTopologyManager on InfinispanNodeFailureTest-NodeC-7981
14:17:30,240 DEBUG [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Started cache manager ISPN on InfinispanNodeFailureTest-NodeC
14:17:30,241 DEBUG [org.infinispan.tx.InfinispanNodeFailureTest] (testng-InfinispanNodeFailureTest) Cache managers created, ready to start the test
14:17:30,241 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) After setup createBeforeClass
14:17:30,242 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) Before setup createBeforeMethod
14:17:30,242 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) After setup createBeforeMethod
[TestSuiteProgress] Test starting: org.infinispan.tx.InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand
14:17:30,243 INFO [org.infinispan.commons.test.TestSuiteProgress] (testng-InfinispanNodeFailureTest) Test starting: org.infinispan.tx.InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand
14:17:30,244 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t2) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0}
14:17:30,245 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t2) sending rsp for 3 to InfinispanNodeFailureTest-NodeA-7443
14:17:30,245 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t2) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #4, conn_id=0)
14:17:30,245 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t2) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=3, rsp_expected=true, UNICAST3: DATA, seqno=4, TP: [cluster_name=ISPN]
14:17:30,245 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,245 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (80 bytes)
14:17:30,246 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=3, rsp_expected=true, UNICAST3: DATA, seqno=4, TP: [cluster_name=ISPN]
14:17:30,246 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #4, conn_id=0)
14:17:30,246 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#4
14:17:30,247 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) About to wire and start cache test_cache
14:17:30,247 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Responses: Responses{ InfinispanNodeFailureTest-NodeB-62629: sender=InfinispanNodeFailureTest-NodeB-62629value=SuccessfulResponse{responseValue=true} , received=true, suspected=false InfinispanNodeFailureTest-NodeC-7981: sender=InfinispanNodeFailureTest-NodeC-7981value=SuccessfulResponse{responseValue=true} , received=true, suspected=false}
14:17:30,339 TRACE [org.infinispan.factories.InterceptorChainFactory] (testng-InfinispanNodeFailureTest) Finished building default interceptor chain.
14:17:30,360 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) No service impls found: FilterIndexingServiceProvider
14:17:30,364 DEBUG [org.infinispan.interceptors.impl.AsyncInterceptorChainImpl] (testng-InfinispanNodeFailureTest) Interceptor chain size: 11
14:17:30,364 DEBUG [org.infinispan.interceptors.impl.AsyncInterceptorChainImpl] (testng-InfinispanNodeFailureTest) Interceptor chain is: >> org.infinispan.interceptors.distribution.DistributionBulkInterceptor@378ba1b1 >> org.infinispan.interceptors.impl.InvocationContextInterceptor@59332f64 >> org.infinispan.interceptors.impl.CacheMgmtInterceptor@6a121857 >> org.infinispan.statetransfer.StateTransferInterceptor@53330666 >> org.infinispan.statetransfer.TransactionSynchronizerInterceptor@395fb398 >> org.infinispan.interceptors.impl.TxInterceptor@79d7f3d8 >> org.infinispan.interceptors.locking.PessimisticLockingInterceptor@37d9c15 >> org.infinispan.interceptors.impl.NotificationInterceptor@4836ba2a >> org.infinispan.interceptors.impl.EntryWrappingInterceptor@c3113c7 >> org.infinispan.interceptors.distribution.TxDistributionInterceptor@4a3e68aa >> org.infinispan.interceptors.impl.CallInterceptor@25ccc8cc
14:17:30,367 TRACE [org.infinispan.transaction.xa.TransactionFactory] (testng-InfinispanNodeFailureTest) Setting factory enum to NODLD_NORECOVERY_XA
14:17:30,369 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long]
14:17:30,369 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderLoads [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute invalidations [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderMisses [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute storeWrites [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=long]
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStaleStatsTreshold
14:17:30,370 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute joinComplete [r=true,w=false,is=true,type=boolean]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stateTransferInProgress [r=true,w=false,is=true,type=boolean]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationFailures [r=true,w=false,is=false,type=long]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute committedViewAsString [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReplicationTime [r=true,w=false,is=false,type=long]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatioFloatingPoint [r=true,w=false,is=false,type=double]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationCount [r=true,w=false,is=false,type=long]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatio [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute pendingViewAsString [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStatisticsEnabled
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheAvailability [r=true,w=true,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute configurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheName [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void clearOperation
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void start
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void stop
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void shutdown
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheAvailability [r=true,w=true,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute configurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheName [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void clearOperation
14:17:30,371 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void start
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void stop
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void shutdown
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isLocatedLocally
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.util.List locateKey
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isAffectedByRehash
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rollbacks [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute prepares [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute commits [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void disconnectSource
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void recordKnownGlobalKeyset
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute concurrencyLevel [r=true,w=false,is=false,type=int]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictionSize [r=true,w=true,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute elapsedTime [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,372 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,373 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@7690d4e under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=ClusterCacheStats
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@8cd8dbf under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Activation
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@557acb82 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=StateTransferManager
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@5d6caf9 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RpcManager
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@4a9aa470 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Cache
14:17:30,373 DEBUG [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Object name infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Cache already registered
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@2e5e68e9 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=DistributionManager
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@2d7b807b under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Transactions
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@1ec9162f under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RollingUpgradeManager
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@16f98a2 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Passivation
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@5c2bb146 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=LockManager
14:17:30,373 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@2d228ae0 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Configuration
14:17:30,374 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@7b8c8905 under infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Statistics
14:17:30,374 TRACE [org.infinispan.jmx.CacheJmxRegistration] (testng-InfinispanNodeFailureTest) ISPN000031: MBeans were successfully registered to the platform MBean server.
14:17:30,374 TRACE [org.infinispan.distribution.impl.DistributionManagerImpl] (testng-InfinispanNodeFailureTest) starting distribution manager on InfinispanNodeFailureTest-NodeA-7443
14:17:30,374 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Starting StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeA-7443
14:17:30,376 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeA-7443 joining cache test_cache
14:17:30,376 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@21cc620d, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=ae38fac5-4aa9-40ca-8897-ff2ec8e65091, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}
14:17:30,380 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Cache test_cache initialized. Persisted state? false
14:17:30,381 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Added joiner InfinispanNodeFailureTest-NodeA-7443 to cache test_cache with persistent uuid ae38fac5-4aa9-40ca-8897-ff2ec8e65091: members = [InfinispanNodeFailureTest-NodeA-7443], joiners = [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,381 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Initializing status for cache test_cache
14:17:30,384 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Cache test_cache topology updated: CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}, members = [InfinispanNodeFailureTest-NodeA-7443], joiners = []
14:17:30,384 TRACE [org.infinispan.topology.CacheTopology] (testng-InfinispanNodeFailureTest) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
14:17:30,384 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Cache test_cache stable topology updated: members = [InfinispanNodeFailureTest-NodeA-7443], joiners = [], topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}
14:17:30,384 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating cluster-wide stable topology for cache test_cache, topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}
14:17:30,385 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000
14:17:30,385 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], throwable=null, viewId=2} to addresses null with response mode GET_NONE
14:17:30,385 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], throwable=null, viewId=2}
14:17:30,391 TRACE [org.jgroups.protocols.pbcast.NAKACK2]
(testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#4 14:17:30,391 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=4], TP: [cluster_name=ISPN] 14:17:30,391 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 used 510 credits, 1999482 remaining 14:17:30,391 DEBUG [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Queueing rebalance for cache test_cache with members [InfinispanNodeFailureTest-NodeA-7443] 14:17:30,391 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (561 bytes (2805.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,392 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Rebalancing consistent hash for cache test_cache, members are [InfinispanNodeFailureTest-NodeA-7443] 14:17:30,392 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (564 bytes) 14:17:30,392 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (564 bytes) 14:17:30,392 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=510 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=4], TP: [cluster_name=ISPN] 14:17:30,393 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) 
InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=510 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=4], TP: [cluster_name=ISPN] 14:17:30,393 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#4 14:17:30,393 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#4 14:17:30,393 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#4 14:17:30,393 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#4 14:17:30,393 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,393 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,394 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) The balanced CH is the same as the current CH, not rebalancing 14:17:30,394 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) Waiting on view 2 being accepted 14:17:30,394 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating local topology for cache test_cache: CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, 
owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]} 14:17:30,394 TRACE [org.infinispan.topology.CacheTopology] (testng-InfinispanNodeFailureTest) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 14:17:30,394 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Installing new cache topology CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]} on cache test_cache 14:17:30,394 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) This is the first topology 1 in which the local node is a member 14:17:30,396 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=1, rebalanceId=1, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]} 14:17:30,396 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (testng-InfinispanNodeFailureTest) Signalling topology 1 is installed 14:17:30,396 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) On cache test_cache we have: added segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,396 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache 14:17:30,397 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Finished adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache 14:17:30,397 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Topology update processed, stateTransferTopologyId = -1, startRebalance = false, pending CH = null 14:17:30,397 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (testng-InfinispanNodeFailureTest) Signalling transaction data received for topology 1 14:17:30,397 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,397 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,397 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Removing no longer owned entries for cache test_cache 14:17:30,398 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Topology changed, recalculating minTopologyId 14:17:30,398 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Changing minimum topology ID from -1 to 1 14:17:30,398 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Initial state transfer complete for cache test_cache on node InfinispanNodeFailureTest-NodeA-7443 14:17:30,398 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating stable topology for cache test_cache: CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]} 14:17:30,398 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeA-7443 received initial topology CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]} 14:17:30,398 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} 
14:17:30,398 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Waiting for initial state transfer to finish for cache test_cache on InfinispanNodeFailureTest-NodeA-7443 14:17:30,398 DEBUG [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Started cache test_cache on InfinispanNodeFailureTest-NodeA-7443 14:17:30,398 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Closing latch for cache test_cache 14:17:30,399 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) About to wire and start cache test_cache 14:17:30,404 TRACE [org.infinispan.factories.InterceptorChainFactory] (testng-InfinispanNodeFailureTest) Finished building default interceptor chain. 14:17:30,406 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) No service impls found: FilterIndexingServiceProvider 14:17:30,407 DEBUG [org.infinispan.interceptors.impl.AsyncInterceptorChainImpl] (testng-InfinispanNodeFailureTest) Interceptor chain size: 11 14:17:30,407 DEBUG [org.infinispan.interceptors.impl.AsyncInterceptorChainImpl] (testng-InfinispanNodeFailureTest) Interceptor chain is: >> org.infinispan.interceptors.distribution.DistributionBulkInterceptor@545339f7 >> org.infinispan.interceptors.impl.InvocationContextInterceptor@6a1730b2 >> org.infinispan.interceptors.impl.CacheMgmtInterceptor@597a4bc1 >> org.infinispan.statetransfer.StateTransferInterceptor@4327922c >> org.infinispan.statetransfer.TransactionSynchronizerInterceptor@ef6873e >> org.infinispan.interceptors.impl.TxInterceptor@5db7b4f2 >> org.infinispan.interceptors.locking.PessimisticLockingInterceptor@5698c8c6 >> org.infinispan.interceptors.impl.NotificationInterceptor@295196cb >> org.infinispan.interceptors.impl.EntryWrappingInterceptor@13243ba8 >> org.infinispan.interceptors.distribution.TxDistributionInterceptor@73540d1a >> 
org.infinispan.interceptors.impl.CallInterceptor@639a254e 14:17:30,407 TRACE [org.infinispan.transaction.xa.TransactionFactory] (testng-InfinispanNodeFailureTest) Setting factory enum to NODLD_NORECOVERY_XA 14:17:30,407 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute elapsedTime [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 
14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute storeWrites [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) 
Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 14:17:30,408 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute invalidations [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderMisses [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderLoads [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable 
[r=true,w=false,is=false,type=int] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStaleStatsTreshold 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute commits [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute prepares [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rollbacks [r=true,w=false,is=false,type=long] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute joinComplete [r=true,w=false,is=true,type=boolean] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stateTransferInProgress [r=true,w=false,is=true,type=boolean] 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isAffectedByRehash 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.util.List locateKey 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean 
isLocatedLocally 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void recordKnownGlobalKeyset 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void disconnectSource 14:17:30,409 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheAvailability [r=true,w=true,is=false,type=java.lang.String] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheName [r=true,w=false,is=false,type=java.lang.String] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute configurationAsProperties [r=true,w=false,is=false,type=java.util.Properties] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void clearOperation 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void shutdown 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void start 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void stop 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] 
(testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictionSize [r=true,w=true,is=false,type=long] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=java.lang.String] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String] 14:17:30,410 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheAvailability [r=true,w=true,is=false,type=java.lang.String] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheName [r=true,w=false,is=false,type=java.lang.String] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute configurationAsProperties [r=true,w=false,is=false,type=java.util.Properties] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void clearOperation 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void 
shutdown 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void start 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void stop 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute concurrencyLevel [r=true,w=false,is=false,type=int] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,411 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute committedViewAsString [r=true,w=false,is=false,type=java.lang.String] 14:17:30,411 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=1, rebalanceId=1, 
currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatioFloatingPoint [r=true,w=false,is=false,type=double] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute pendingViewAsString [r=true,w=false,is=false,type=java.lang.String] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatio [r=true,w=false,is=false,type=java.lang.String] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReplicationTime [r=true,w=false,is=false,type=long] 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationFailures [r=true,w=false,is=false,type=long] 14:17:30,411 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationCount [r=true,w=false,is=false,type=long] 14:17:30,411 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 510 credits, 1999482 remaining 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,411 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStatisticsEnabled 14:17:30,411 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are 
waiting
14:17:30,411 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 510 credits, 1999482 remaining
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@150303aa under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Statistics
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@76a189b6 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=ClusterCacheStats
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@3d1db5c9 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Transactions
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@2ac3a968 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=StateTransferManager
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@2750f1c9 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=DistributionManager
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@5dc620ac under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RollingUpgradeManager
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@d256520 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Cache
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@3a1a0505 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Passivation
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@1a647536 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Configuration
14:17:30,412 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@76cbf596 under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Activation
14:17:30,412 DEBUG [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Object name infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Cache already registered
14:17:30,413 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@50e1920a under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=LockManager
14:17:30,413 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@3ab9873e under infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RpcManager
14:17:30,413 TRACE [org.infinispan.jmx.CacheJmxRegistration] (testng-InfinispanNodeFailureTest) ISPN000031: MBeans were successfully registered to the platform MBean server.
14:17:30,413 TRACE [org.infinispan.distribution.impl.DistributionManagerImpl] (testng-InfinispanNodeFailureTest) starting distribution manager on InfinispanNodeFailureTest-NodeB-62629
14:17:30,413 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Starting StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeB-62629
14:17:30,413 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeB-62629 joining cache test_cache
14:17:30,413 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@762e1ca8, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=a61656c9-925c-4276-a2fd-582b76ae2bf3, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=SYNCHRONOUS, timeout=240000
14:17:30,413 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@762e1ca8, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=a61656c9-925c-4276-a2fd-582b76ae2bf3, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL
14:17:30,414 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: invoking unicast RPC [req-id=5] on InfinispanNodeFailureTest-NodeA-7443
14:17:30,414 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #7, conn_id=0)
14:17:30,414 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=5, rsp_expected=true, UNICAST3: DATA, seqno=7, TP: [cluster_name=ISPN]
14:17:30,414 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (167 bytes (835.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,414 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (170 bytes)
14:17:30,414 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=95 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=5, rsp_expected=true, UNICAST3: DATA, seqno=7, TP: [cluster_name=ISPN]
14:17:30,414 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #7, conn_id=0)
14:17:30,415 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#7
14:17:30,415 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 5
14:17:30,415 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@67421807, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=a61656c9-925c-4276-a2fd-582b76ae2bf3, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeB-62629]
14:17:30,415 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,415 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Added joiner InfinispanNodeFailureTest-NodeB-62629 to cache test_cache with persistent uuid a61656c9-925c-4276-a2fd-582b76ae2bf3: members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = [InfinispanNodeFailureTest-NodeB-62629]
14:17:30,415 DEBUG [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Queueing rebalance for cache test_cache with members [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,415 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Rebalancing consistent hash for cache test_cache, members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,416 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Updating cache test_cache topology for rebalance: CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,416 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Cache test_cache topology updated: CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = [InfinispanNodeFailureTest-NodeB-62629]
14:17:30,416 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
14:17:30,416 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Pending consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,417 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Initialized rebalance confirmation collector 2@test_cache, initial list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,417 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) ISPN000310: Starting cluster-wide rebalance for cache test_cache, topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,418 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeA-7443]ISPN100002: Started local rebalance
14:17:30,418 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000
14:17:30,418 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}
14:17:30,418 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE
14:17:30,419 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#5
14:17:30,419 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=5], TP: [cluster_name=ISPN]
14:17:30,419 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (905 bytes (4525.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,419 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (908 bytes)
14:17:30,419 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (908 bytes)
14:17:30,419 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 used 854 credits, 1998628 remaining
14:17:30,419 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=854 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=5], TP: [cluster_name=ISPN]
14:17:30,419 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#5
14:17:30,420 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#5
14:17:30,420 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,420 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=854 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=5], TP: [cluster_name=ISPN]
14:17:30,420 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#5
14:17:30,420 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#5
14:17:30,419 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) About to send back response SuccessfulResponse{responseValue=StatusResponse{cacheJoinInfo=null, cacheTopology=CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}, stableTopology=CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}}} for command CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@67421807, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=a61656c9-925c-4276-a2fd-582b76ae2bf3, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}
14:17:30,420 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,420 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) sending rsp for 5 to InfinispanNodeFailureTest-NodeB-62629
14:17:30,420 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Waiting on view 2 being accepted
14:17:30,420 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #4, conn_id=1)
14:17:30,420 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Updating local topology for cache test_cache: CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,420 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=5, rsp_expected=true, UNICAST3: DATA, seqno=4, conn_id=1, TP: [cluster_name=ISPN]
14:17:30,420 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Starting local rebalance for cache test_cache, topology = CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,420 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
14:17:30,420 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Pending consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,420 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (597 bytes (2985.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629]
14:17:30,421 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (600 bytes)
14:17:30,420 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Installing new cache topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 60, InfinispanNodeFailureTest-NodeB-62629: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache
14:17:30,421 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,421 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,421 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=2, rebalanceId=2, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 60, InfinispanNodeFailureTest-NodeB-62629: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,421 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Start keeping track of keys for rebalance
14:17:30,421 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,421 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,421 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,421 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 854 credits, 1998628 remaining
14:17:30,421 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 854 credits, 1998628 remaining
14:17:30,421 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Tracking is disabled. Clear tracker: {}
14:17:30,421 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Set track to PUT_FOR_STATE_TRANSFER = true
14:17:30,421 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Signalling topology 2 is installed
14:17:30,421 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=525 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=5, rsp_expected=true, UNICAST3: DATA, seqno=4, conn_id=1, TP: [cluster_name=ISPN]
14:17:30,421 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #4, conn_id=1)
14:17:30,421 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#4
14:17:30,421 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
14:17:30,422 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) On cache test_cache we have: added segments: []; removed segments: []
14:17:30,422 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Topology update processed, stateTransferTopologyId = 2, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}
14:17:30,422 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Signalling transaction data received for topology 2
14:17:30,422 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Finished receiving of segments for cache test_cache for topology 2.
14:17:30,422 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Stop keeping track of changed keys for state transfer
14:17:30,422 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t4) Ignoring rebalance 2 for cache test_cache that doesn't exist locally
14:17:30,422 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,422 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Tracking is disabled. Clear tracker: {}
14:17:30,422 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Responses: sender=InfinispanNodeFailureTest-NodeA-7443
value=SuccessfulResponse{responseValue=StatusResponse{cacheJoinInfo=null, cacheTopology=CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}, stableTopology=CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}}} , received=true, suspected=false
14:17:30,422 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}
14:17:30,422 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0
14:17:30,422 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,422 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Removing no longer owned entries for cache test_cache
14:17:30,422 DEBUG [org.infinispan.CLUSTER] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeA-7443, topology id = 2
14:17:30,422 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Topology changed, recalculating minTopologyId
14:17:30,422 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Changing minimum topology ID from 1 to 2
14:17:30,422 INFO [org.infinispan.CLUSTER] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeA-7443]ISPN100003: Finished local rebalance
14:17:30,422 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Rebalance confirmation collector 2@test_cache received confirmation for InfinispanNodeFailureTest-NodeA-7443, remaining list is [InfinispanNodeFailureTest-NodeB-62629]
14:17:30,423 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) Waiting on view 2 being accepted
14:17:30,423 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating local topology for cache test_cache: CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}
14:17:30,423 TRACE [org.infinispan.topology.CacheTopology] (testng-InfinispanNodeFailureTest) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
14:17:30,423 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Installing new cache topology CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]} on cache test_cache
14:17:30,423 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Received new topology for cache test_cache, isRebalance = false, isMember = false, topology = CacheTopology{id=1, rebalanceId=1, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}
14:17:30,423 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (testng-InfinispanNodeFailureTest) Signalling topology 1 is installed
14:17:30,423 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) On cache test_cache we have: added segments: []
14:17:30,423 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Topology update processed, stateTransferTopologyId = -1, startRebalance = false, pending CH = null
14:17:30,423 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (testng-InfinispanNodeFailureTest) Signalling transaction data received for topology 1
14:17:30,423 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0
14:17:30,423 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,423 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Topology changed, recalculating minTopologyId
14:17:30,423 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Changing minimum topology ID from -1 to 1
14:17:30,423 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating stable topology for cache test_cache: CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}
14:17:30,423 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeB-62629 received initial topology CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091]}
14:17:30,423 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Waiting on view 2 being accepted
14:17:30,423 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS,
sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=SYNCHRONOUS, timeout=240000 14:17:30,423 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Updating local topology for cache test_cache: CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,423 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,423 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Starting local rebalance for cache test_cache, topology = CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, 
a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,424 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 14:17:30,424 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Pending consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,424 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: invoking unicast RPC [req-id=6] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,424 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Installing new cache topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 60, InfinispanNodeFailureTest-NodeB-62629: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,424 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #8, conn_id=0) 14:17:30,424 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] 
(transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) This is the first topology 2 in which the local node is a member 14:17:30,424 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=6, rsp_expected=true, UNICAST3: DATA, seqno=8, TP: [cluster_name=ISPN] 14:17:30,424 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=2, rebalanceId=2, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 60, InfinispanNodeFailureTest-NodeB-62629: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,424 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Start keeping track of keys for rebalance 14:17:30,424 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,424 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Tracking is disabled. 
Clear tracker: {} 14:17:30,424 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Set track to PUT_FOR_STATE_TRANSFER = true 14:17:30,424 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (94 bytes (470.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,424 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Signalling topology 2 is installed 14:17:30,424 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (97 bytes) 14:17:30,424 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [] 14:17:30,424 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) On cache test_cache we have: added segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; removed segments: [] 14:17:30,424 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of 
cache test_cache 14:17:30,424 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Requesting transactions for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache from node InfinispanNodeFailureTest-NodeA-7443 14:17:30,424 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=22 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=6, rsp_expected=true, UNICAST3: DATA, seqno=8, TP: [cluster_name=ISPN] 14:17:30,425 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #8, conn_id=0) 14:17:30,425 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#8 14:17:30,425 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 6 14:17:30,425 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeB-62629] 14:17:30,425 TRACE 
[org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,425 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} 14:17:30,425 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) sending rsp for 6 to InfinispanNodeFailureTest-NodeB-62629 14:17:30,425 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #5, conn_id=1) 14:17:30,425 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=6, rsp_expected=true, UNICAST3: DATA, seqno=5, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,425 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,425 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) InfinispanNodeFailureTest-NodeB-62629 invoking StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]} to recipient list [InfinispanNodeFailureTest-NodeA-7443] with options RpcOptions{timeout=240000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS} 14:17:30,425 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (80 bytes) 14:17:30,425 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) dests=[InfinispanNodeFailureTest-NodeA-7443], command=StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]}, mode=SYNCHRONOUS, timeout=240000 14:17:30,425 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Replication task sending StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,426 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=5 bytes, 
flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=6, rsp_expected=true, UNICAST3: DATA, seqno=5, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,426 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #5, conn_id=1) 14:17:30,426 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#5 14:17:30,426 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=true} , received=true, suspected=false 14:17:30,426 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Waiting for initial state transfer to finish for cache test_cache on InfinispanNodeFailureTest-NodeB-62629 14:17:30,426 TRACE [org.jgroups.blocks.RequestCorrelator] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) InfinispanNodeFailureTest-NodeB-62629: invoking unicast RPC [req-id=7] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,426 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #9, conn_id=0) 14:17:30,426 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=7, rsp_expected=true, UNICAST3: DATA, seqno=9, TP: [cluster_name=ISPN] 14:17:30,426 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) 
InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (416 bytes (2080.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,426 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (419 bytes) 14:17:30,427 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=344 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=7, rsp_expected=true, UNICAST3: DATA, seqno=9, TP: [cluster_name=ISPN] 14:17:30,427 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #9, conn_id=0) 14:17:30,427 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#9 14:17:30,427 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 7 14:17:30,428 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute CacheRpcCommand: StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]} [sender=InfinispanNodeFailureTest-NodeB-62629] 14:17:30,429 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] 
(OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,429 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Calling perform() on StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]} 14:17:30,429 TRACE [org.infinispan.statetransfer.StateProviderImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Received request for transactions from node InfinispanNodeFailureTest-NodeB-62629 for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache with topology id 2 14:17:30,430 TRACE [org.infinispan.statetransfer.StateProviderImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Found 0 transaction(s) to transfer 14:17:30,431 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) About to send back response SuccessfulResponse{responseValue=[]} for command StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]} 14:17:30,431 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) sending rsp for 7 to 
InfinispanNodeFailureTest-NodeB-62629 14:17:30,431 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #6, conn_id=1) 14:17:30,431 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=7, rsp_expected=true, UNICAST3: DATA, seqno=6, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,431 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,431 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (83 bytes) 14:17:30,431 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=7, rsp_expected=true, UNICAST3: DATA, seqno=6, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,431 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #6, conn_id=1) 14:17:30,431 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#6 14:17:30,432 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) 
Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=[]} , received=true, suspected=false 14:17:30,432 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Response(s) to StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeB-62629, type=GET_TRANSACTIONS, topologyId=2, segments=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]} is {InfinispanNodeFailureTest-NodeA-7443=SuccessfulResponse{responseValue=[]} } 14:17:30,432 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Applying 0 transactions for cache test_cache transferred from node InfinispanNodeFailureTest-NodeA-7443 14:17:30,432 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Finished adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache 14:17:30,432 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Topology update processed, stateTransferTopologyId = 2, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} 14:17:30,432 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Signalling transaction data received for topology 2 14:17:30,432 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] 
(transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Finished receiving of segments for cache test_cache for topology 2. 14:17:30,432 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Stop keeping track of changed keys for state transfer 14:17:30,432 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,432 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Tracking is disabled. Clear tracker: {} 14:17:30,432 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=0 14:17:30,432 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=2, rebalanceId=2, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_NONE 14:17:30,432 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #10, conn_id=0) 14:17:30,432 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) InfinispanNodeFailureTest-NodeB-62629: sending msg 
to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=10, TP: [cluster_name=ISPN] 14:17:30,432 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (122 bytes (610.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,432 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (125 bytes) 14:17:30,433 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,433 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,433 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=51 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=10, TP: [cluster_name=ISPN]
14:17:30,433 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Removing no longer owned entries for cache test_cache
14:17:30,433 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #10, conn_id=0)
14:17:30,433 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Topology changed, recalculating minTopologyId
14:17:30,433 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#10
14:17:30,433 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Changing minimum topology ID from 1 to 2
14:17:30,433 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,433 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=2, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeB-62629]
14:17:30,433 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,433 DEBUG [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeB-62629, topology id = 2
14:17:30,433 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeB-62629]ISPN100003: Finished local rebalance
14:17:30,433 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Rebalance confirmation collector 2@test_cache received confirmation for InfinispanNodeFailureTest-NodeB-62629, remaining list is []
14:17:30,433 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) ISPN000336: Finished cluster-wide rebalance for cache test_cache, topology id = 2
14:17:30,433 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Cache test_cache topology updated: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = []
14:17:30,433 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0,
1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,433 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Updating cluster-wide current topology for cache test_cache, topology = CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, availability mode = AVAILABLE
14:17:30,433 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000
14:17:30,433 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE
14:17:30,433 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}
14:17:30,434 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#6
14:17:30,434 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=6], TP: [cluster_name=ISPN]
14:17:30,434 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1998055 remaining
14:17:30,434 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Updating stable topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,434 TRACE [org.jgroups.protocols.TCP_NIO2]
(TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (624 bytes (3120.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,434 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Updating cluster-wide stable topology for cache test_cache, topology = CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,434 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (627 bytes)
14:17:30,434 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000
14:17:30,434 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}
14:17:30,434 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE
14:17:30,434 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (627 bytes)
14:17:30,434 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Updating stable topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,434 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#7
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to null,
src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=7], TP: [cluster_name=ISPN]
14:17:30,435 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 used 572 credits, 1997483 remaining
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=573 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=6], TP: [cluster_name=ISPN]
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=573 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=6], TP: [cluster_name=ISPN]
14:17:30,435 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#6
14:17:30,435 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#6
14:17:30,435 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#6
14:17:30,435 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#6
14:17:30,435 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,435 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (623 bytes (3115.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (626 bytes)
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=572 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=7], TP: [cluster_name=ISPN]
14:17:30,435 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#7
14:17:30,435 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#7
14:17:30,435 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,436 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null,
topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,436 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,436 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,436 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,436 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,436 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1998055 remaining
14:17:30,436 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Waiting on view 2 being accepted
14:17:30,435 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (626 bytes)
14:17:30,436 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Updating local topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,436 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,436 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Installing new cache topology CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443,
InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache
14:17:30,436 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=3, rebalanceId=2, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,436 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 572 credits, 1998056 remaining
14:17:30,436 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Signalling topology 3 is installed
14:17:30,436 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,436 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1998055 remaining
14:17:30,436 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
14:17:30,436 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) On cache test_cache we have: added segments: []; removed segments: []
14:17:30,436 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Topology update processed, stateTransferTopologyId = 2, startRebalance = false, pending CH = null
14:17:30,436 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Stop keeping track of changed keys for state transfer
14:17:30,436 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,436 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Tracking is disabled. Clear tracker: {}
14:17:30,437 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} with new topology 3 and was pre false
14:17:30,437 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=572 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=7], TP: [cluster_name=ISPN]
14:17:30,437 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) No change listeners present!
14:17:30,437 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Ignoring consistent hash update 3 for cache test_cache that doesn't exist locally
14:17:30,437 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Updating stable topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,437 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#7
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Unlock State Transfer in Progress for topology ID 3
14:17:30,437 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Waiting on view 2 being accepted
14:17:30,437 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#7
14:17:30,437 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,437 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Updating local topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,437 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,437 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Signalling transaction data received for topology 3
14:17:30,437 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Installing new cache topology CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache
14:17:30,437 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0
14:17:30,437 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=3, rebalanceId=2, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,437 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Removing no longer owned entries for cache test_cache
14:17:30,437 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Signalling topology 3 is installed
14:17:30,437 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Topology changed, recalculating minTopologyId
14:17:30,437 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Changing minimum topology ID from 2 to 3
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) On cache test_cache we have: added segments: []; removed segments: []
14:17:30,437 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Topology update processed, stateTransferTopologyId = 2, startRebalance = false, pending CH = null
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Stop keeping track of changed keys for state transfer
14:17:30,437 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,437 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Tracking is disabled. Clear tracker: {}
14:17:30,437 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeA-7443: 60]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} with new topology 3 and was pre false
14:17:30,437 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) No change listeners present!
14:17:30,437 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,437 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Unlock State Transfer in Progress for topology ID 3
14:17:30,437 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Signalling transaction data received for topology 3
14:17:30,437 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 572 credits, 1997483 remaining
14:17:30,437 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0
14:17:30,437 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,437 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Removing no longer owned entries for cache test_cache
14:17:30,437 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Topology changed, recalculating minTopologyId
14:17:30,437 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Changing minimum topology ID from 2 to 3
14:17:30,437 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Initial state transfer complete for cache test_cache on node InfinispanNodeFailureTest-NodeB-62629
14:17:30,438 DEBUG [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Started cache test_cache on InfinispanNodeFailureTest-NodeB-62629
14:17:30,438 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Closing latch for cache test_cache
14:17:30,438 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) About to wire and start cache test_cache
14:17:30,441 TRACE [org.infinispan.factories.InterceptorChainFactory] (testng-InfinispanNodeFailureTest) Finished building default interceptor chain.
14:17:30,443 DEBUG [org.infinispan.commons.util.ServiceFinder] (testng-InfinispanNodeFailureTest) No service impls found: FilterIndexingServiceProvider 14:17:30,443 DEBUG [org.infinispan.interceptors.impl.AsyncInterceptorChainImpl] (testng-InfinispanNodeFailureTest) Interceptor chain size: 11 14:17:30,443 DEBUG [org.infinispan.interceptors.impl.AsyncInterceptorChainImpl] (testng-InfinispanNodeFailureTest) Interceptor chain is: >> org.infinispan.interceptors.distribution.DistributionBulkInterceptor@69637a98 >> org.infinispan.interceptors.impl.InvocationContextInterceptor@734bd7fa >> org.infinispan.interceptors.impl.CacheMgmtInterceptor@6d5434a6 >> org.infinispan.statetransfer.StateTransferInterceptor@59810a22 >> org.infinispan.statetransfer.TransactionSynchronizerInterceptor@2bc41c30 >> org.infinispan.interceptors.impl.TxInterceptor@61deab5 >> org.infinispan.interceptors.locking.PessimisticLockingInterceptor@4aa01fd4 >> org.infinispan.interceptors.impl.NotificationInterceptor@1ba1dcb8 >> org.infinispan.interceptors.impl.EntryWrappingInterceptor@108f7660 >> org.infinispan.interceptors.distribution.TxDistributionInterceptor@792e6e86 >> org.infinispan.interceptors.impl.CallInterceptor@333a80c8 14:17:30,444 TRACE [org.infinispan.transaction.xa.TransactionFactory] (testng-InfinispanNodeFailureTest) Setting factory enum to NODLD_NORECOVERY_XA 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute concurrencyLevel [r=true,w=false,is=false,type=int] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE 
[org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderMisses [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute storeWrites [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) 
Attribute stores [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderLoads [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute invalidations [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,444 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStaleStatsTreshold 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE 
[org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheName [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheAvailability [r=true,w=true,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute configurationAsProperties [r=true,w=false,is=false,type=java.util.Properties] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void shutdown 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void stop 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void start 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void clearOperation 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheName [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute 
cacheAvailability [r=true,w=true,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute configurationAsProperties [r=true,w=false,is=false,type=java.util.Properties] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void shutdown 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void stop 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void start 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void clearOperation 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictionSize [r=true,w=true,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void recordKnownGlobalKeyset 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void disconnectSource 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rollbacks [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] 
(testng-InfinispanNodeFailureTest) Attribute prepares [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute commits [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationFailures [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute pendingViewAsString [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatio [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReplicationTime [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute committedViewAsString [r=true,w=false,is=false,type=java.lang.String] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatioFloatingPoint [r=true,w=false,is=false,type=double] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationCount [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStatisticsEnabled 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] 
(testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute elapsedTime [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 
14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,445 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,446 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isLocatedLocally 14:17:30,446 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.util.List locateKey 14:17:30,446 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isAffectedByRehash 14:17:30,446 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stateTransferInProgress [r=true,w=false,is=true,type=boolean] 14:17:30,446 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute joinComplete [r=true,w=false,is=true,type=boolean] 14:17:30,446 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@7f645ef0 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=LockManager 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@3ff15047 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Passivation 
14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@18f8ff4c under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=ClusterCacheStats 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@4267ac09 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Cache 14:17:30,446 DEBUG [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Object name infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Cache already registered 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@1fa9f96b under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Configuration 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@2aeff1b1 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Activation 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@7532793f under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RollingUpgradeManager 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@aca9474 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Transactions 14:17:30,446 TRACE 
[org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@5a7846e7 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RpcManager 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@fcf5660 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Statistics 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@69d23d27 under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=DistributionManager 14:17:30,446 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Registered org.infinispan.jmx.ResourceDMBean@37656d5d under infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=StateTransferManager 14:17:30,446 TRACE [org.infinispan.jmx.CacheJmxRegistration] (testng-InfinispanNodeFailureTest) ISPN000031: MBeans were successfully registered to the platform MBean server. 
14:17:30,446 TRACE [org.infinispan.distribution.impl.DistributionManagerImpl] (testng-InfinispanNodeFailureTest) starting distribution manager on InfinispanNodeFailureTest-NodeC-7981 14:17:30,447 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Starting StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeC-7981 14:17:30,447 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeC-7981 joining cache test_cache 14:17:30,451 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@22f8408d, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=4e681186-1323-4fd9-bb28-fee04bf0f3a5, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=SYNCHRONOUS, timeout=240000 14:17:30,452 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@22f8408d, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=4e681186-1323-4fd9-bb28-fee04bf0f3a5, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, 
actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,452 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: invoking unicast RPC [req-id=8] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,452 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #5, conn_id=0) 14:17:30,452 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=8, rsp_expected=true, UNICAST3: DATA, seqno=5, TP: [cluster_name=ISPN] 14:17:30,452 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (167 bytes (835.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,452 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (170 bytes) 14:17:30,453 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=95 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=8, rsp_expected=true, UNICAST3: DATA, seqno=5, TP: [cluster_name=ISPN] 14:17:30,453 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #5, conn_id=0) 14:17:30,453 TRACE [org.jgroups.protocols.UNICAST3] 
(OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#5 14:17:30,453 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 8 14:17:30,453 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@b83deee, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=4e681186-1323-4fd9-bb28-fee04bf0f3a5, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeC-7981] 14:17:30,453 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,453 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Added joiner InfinispanNodeFailureTest-NodeC-7981 to cache test_cache with persistent uuid 4e681186-1323-4fd9-bb28-fee04bf0f3a5: members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], joiners = [InfinispanNodeFailureTest-NodeC-7981] 14:17:30,453 DEBUG [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Queueing rebalance for cache test_cache with members [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, 
InfinispanNodeFailureTest-NodeC-7981] 14:17:30,453 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Rebalancing consistent hash for cache test_cache, members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,454 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Updating cache test_cache topology for rebalance: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,454 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Cache test_cache topology updated: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]}, members = [InfinispanNodeFailureTest-NodeA-7443, 
InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], joiners = [InfinispanNodeFailureTest-NodeC-7981] 14:17:30,454 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,454 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Pending consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,454 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Initialized rebalance confirmation collector 4@test_cache, initial list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,454 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) ISPN000310: Starting cluster-wide rebalance for cache test_cache, topology CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,454 INFO [org.infinispan.CLUSTER] 
(remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeA-7443]ISPN100002: Started local rebalance 14:17:30,454 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000 14:17:30,454 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} 14:17:30,455 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Replication 
task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} to addresses null with response mode GET_NONE 14:17:30,455 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Waiting on view 2 being accepted 14:17:30,455 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Updating local topology for cache test_cache: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,455 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Starting local rebalance for cache test_cache, topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = 
(2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,455 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,455 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Pending consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,455 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Installing new cache topology CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31, InfinispanNodeFailureTest-NodeC-7981: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, 
InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} on cache test_cache 14:17:30,455 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#8 14:17:30,455 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=8], TP: [cluster_name=ISPN] 14:17:30,455 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=4, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31, InfinispanNodeFailureTest-NodeC-7981: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,455 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) 
InfinispanNodeFailureTest-NodeA-7443 used 940 credits, 1996543 remaining 14:17:30,455 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Start keeping track of keys for rebalance 14:17:30,455 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,455 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Tracking is disabled. Clear tracker: {} 14:17:30,455 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Set track to PUT_FOR_STATE_TRANSFER = true 14:17:30,455 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) About to send back response SuccessfulResponse{responseValue=StatusResponse{cacheJoinInfo=null, cacheTopology=CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, stableTopology=CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}}} for command CacheTopologyControlCommand{cache=test_cache, type=JOIN, sender=InfinispanNodeFailureTest-NodeC-7981, 
joinInfo=CacheJoinInfo{consistentHashFactory=org.infinispan.distribution.ch.impl.SyncReplicatedConsistentHashFactory@b83deee, hashFunction=MurmurHash3, numSegments=60, numOwners=2, timeout=240000, totalOrder=false, distributed=false, persistentUUID=4e681186-1323-4fd9-bb28-fee04bf0f3a5, persistentStateChecksum=Optional.empty}, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} 14:17:30,455 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Signalling topology 4 is installed 14:17:30,455 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,455 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,455 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) sending rsp for 8 to InfinispanNodeFailureTest-NodeC-7981 14:17:30,455 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Topology update processed, stateTransferTopologyId = 4, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, 
InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]} 14:17:30,455 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #3, conn_id=2) 14:17:30,455 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Signalling transaction data received for topology 4 14:17:30,455 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=8, rsp_expected=true, UNICAST3: DATA, seqno=3, conn_id=2, TP: [cluster_name=ISPN] 14:17:30,455 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Finished receiving of segments for cache test_cache for topology 4. 14:17:30,455 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Stop keeping track of changed keys for state transfer 14:17:30,455 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,455 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Tracking is disabled. Clear tracker: {} 14:17:30,455 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Checking for transactions originated on leavers. 
Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,455 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} 14:17:30,455 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) No remote transactions pertain to originator(s) who have left the cluster. 14:17:30,456 DEBUG [org.infinispan.CLUSTER] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeA-7443, topology id = 4 14:17:30,456 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Removing no longer owned entries for cache test_cache 14:17:30,456 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Topology changed, recalculating minTopologyId 14:17:30,456 INFO [org.infinispan.CLUSTER] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeA-7443]ISPN100003: Finished local rebalance 14:17:30,456 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Changing minimum topology ID from 3 to 4 14:17:30,456 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Rebalance confirmation collector 4@test_cache received confirmation for InfinispanNodeFailureTest-NodeA-7443, remaining list is 
[InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 2 msgs (1631 bytes (8155.00% of max_bundle_size) to 2 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981, ISPN] 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (643 bytes) 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (994 bytes) 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (994 bytes) 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=568 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=8, rsp_expected=true, UNICAST3: DATA, seqno=3, conn_id=2, TP: [cluster_name=ISPN] 14:17:30,456 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #3, conn_id=2) 14:17:30,456 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#3 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=940 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=8], TP: 
[cluster_name=ISPN] 14:17:30,456 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=940 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=8], TP: [cluster_name=ISPN] 14:17:30,456 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#8 14:17:30,456 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#8 14:17:30,456 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#8 14:17:30,456 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#8 14:17:30,457 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,457 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,457 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=StatusResponse{cacheJoinInfo=null, cacheTopology=CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, 
InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, stableTopology=CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}}} , received=true, suspected=false 14:17:30,457 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) Waiting on view 2 being accepted 14:17:30,457 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating local topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,457 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, 
InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,457 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,457 TRACE [org.infinispan.topology.CacheTopology] (testng-InfinispanNodeFailureTest) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,457 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,457 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Installing new cache topology CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, 
InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,457 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 940 credits, 1996543 remaining 14:17:30,457 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting 14:17:30,457 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Received new topology for cache test_cache, isRebalance = false, isMember = false, topology = CacheTopology{id=3, rebalanceId=2, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,457 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 940 credits, 1996543 remaining 14:17:30,457 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (testng-InfinispanNodeFailureTest) Signalling topology 3 is installed 14:17:30,457 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) On cache test_cache we have: added segments: [] 14:17:30,457 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Topology update processed, stateTransferTopologyId = -1, startRebalance = false, pending CH = null 14:17:30,457 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] 
(testng-InfinispanNodeFailureTest) Signalling transaction data received for topology 3 14:17:30,457 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,457 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) No remote transactions pertain to originator(s) who have left the cluster. 14:17:30,458 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Topology changed, recalculating minTopologyId 14:17:30,458 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Changing minimum topology ID from -1 to 3 14:17:30,458 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating stable topology for cache test_cache: CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,458 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Waiting on view 2 being accepted 14:17:30,458 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Updating local topology for cache test_cache: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, 
owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,458 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeC-7981 received initial topology CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,458 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=SYNCHRONOUS, timeout=240000 14:17:30,458 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Waiting on view 2 being accepted 14:17:30,458 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, 
pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,458 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Updating local topology for cache test_cache: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,458 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Starting local rebalance for cache test_cache, topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,458 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 
1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,458 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: invoking unicast RPC [req-id=9] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,458 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Pending consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,458 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #6, conn_id=0) 14:17:30,458 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Installing new cache topology CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31, InfinispanNodeFailureTest-NodeC-7981: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} on cache test_cache 14:17:30,459 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] 
(transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) This is the first topology 4 in which the local node is a member
14:17:30,459 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=9, rsp_expected=true, UNICAST3: DATA, seqno=6, TP: [cluster_name=ISPN]
14:17:30,459 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=4, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31, InfinispanNodeFailureTest-NodeC-7981: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]}
14:17:30,459 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Start keeping track of keys for rebalance
14:17:30,459 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,459 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Tracking is disabled. Clear tracker: {}
14:17:30,459 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Set track to PUT_FOR_STATE_TRANSFER = true
14:17:30,459 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Signalling topology 4 is installed
14:17:30,459 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: []
14:17:30,459 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) On cache test_cache we have: added segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; removed segments: []
14:17:30,459 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache
14:17:30,459 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Requesting transactions for segments [0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59] of cache test_cache from node InfinispanNodeFailureTest-NodeA-7443
14:17:30,459 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981 invoking StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]} to recipient list [InfinispanNodeFailureTest-NodeA-7443] with options RpcOptions{timeout=240000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS}
14:17:30,459 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) dests=[InfinispanNodeFailureTest-NodeA-7443], command=StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]}, mode=SYNCHRONOUS, timeout=240000
14:17:30,459 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Replication task sending StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL
14:17:30,460 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (94 bytes (470.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,460 TRACE [org.jgroups.blocks.RequestCorrelator] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981: invoking unicast RPC [req-id=10] on InfinispanNodeFailureTest-NodeA-7443
14:17:30,460 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (97 bytes)
14:17:30,460 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #7, conn_id=0)
14:17:30,460 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=10, rsp_expected=true, UNICAST3: DATA, seqno=7, TP: [cluster_name=ISPN]
14:17:30,460 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (261 bytes (1305.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,460 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (264 bytes)
14:17:30,458 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Starting local rebalance for cache test_cache, topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]}
14:17:30,460 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,460 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Pending consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0]
14:17:30,461 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Installing new cache topology CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31, InfinispanNodeFailureTest-NodeC-7981: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} on cache test_cache
14:17:30,461 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=4, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31, InfinispanNodeFailureTest-NodeC-7981: 0]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]}
14:17:30,461 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Start keeping track of keys for rebalance
14:17:30,461 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,461 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Tracking is disabled. Clear tracker: {}
14:17:30,461 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Set track to PUT_FOR_STATE_TRANSFER = true
14:17:30,461 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Signalling topology 4 is installed
14:17:30,461 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
14:17:30,461 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) On cache test_cache we have: added segments: []; removed segments: []
14:17:30,461 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Topology update processed, stateTransferTopologyId = 4, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}
14:17:30,461 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Signalling transaction data received for topology 4
14:17:30,461 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Finished receiving of segments for cache test_cache for topology 4.
14:17:30,461 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Stop keeping track of changed keys for state transfer
14:17:30,461 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,461 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Tracking is disabled. Clear tracker: {}
14:17:30,461 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=0
14:17:30,461 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_NONE
14:17:30,462 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #11, conn_id=0)
14:17:30,462 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=11, TP: [cluster_name=ISPN]
14:17:30,462 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (122 bytes (610.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,462 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (125 bytes)
14:17:30,462 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=22 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=9, rsp_expected=true, UNICAST3: DATA, seqno=6, TP: [cluster_name=ISPN]
14:17:30,462 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #6, conn_id=0)
14:17:30,462 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#6
14:17:30,462 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 9
14:17:30,462 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0
14:17:30,462 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,462 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeC-7981]
14:17:30,462 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Removing no longer owned entries for cache test_cache
14:17:30,462 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,462 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Topology changed, recalculating minTopologyId
14:17:30,462 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Changing minimum topology ID from 3 to 4
14:17:30,462 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=test_cache, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0}
14:17:30,463 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) sending rsp for 9 to InfinispanNodeFailureTest-NodeC-7981
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #4, conn_id=2)
14:17:30,463 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=9, rsp_expected=true, UNICAST3: DATA, seqno=4, conn_id=2, TP: [cluster_name=ISPN]
14:17:30,463 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=189 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=10, rsp_expected=true, UNICAST3: DATA, seqno=7, TP: [cluster_name=ISPN]
14:17:30,463 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=51 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=11, TP: [cluster_name=ISPN]
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #7, conn_id=0)
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#7
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #11, conn_id=0)
14:17:30,463 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981]
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#11
14:17:30,463 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 10
14:17:30,463 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,463 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (80 bytes)
14:17:30,463 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=4, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeB-62629]
14:17:30,463 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,463 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute CacheRpcCommand: StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]} [sender=InfinispanNodeFailureTest-NodeC-7981]
14:17:30,463 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,463 DEBUG [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeB-62629, topology id = 4
14:17:30,463 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Calling perform() on StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]}
14:17:30,463 TRACE [org.infinispan.statetransfer.StateProviderImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Received request for transactions from node InfinispanNodeFailureTest-NodeC-7981 for segments [0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59] of cache test_cache with topology id 4
14:17:30,463 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=9, rsp_expected=true, UNICAST3: DATA, seqno=4, conn_id=2, TP: [cluster_name=ISPN]
14:17:30,463 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeB-62629]ISPN100003: Finished local rebalance
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #4, conn_id=2)
14:17:30,463 TRACE [org.infinispan.statetransfer.StateProviderImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Found 0 transaction(s) to transfer
14:17:30,463 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t6) Rebalance confirmation collector 4@test_cache received confirmation for InfinispanNodeFailureTest-NodeB-62629, remaining list is [InfinispanNodeFailureTest-NodeC-7981]
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#4
14:17:30,463 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) About to send back response SuccessfulResponse{responseValue=[]} for command StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]}
14:17:30,463 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=true} , received=true, suspected=false
14:17:30,463 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) sending rsp for 10 to InfinispanNodeFailureTest-NodeC-7981
14:17:30,463 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #5, conn_id=2)
14:17:30,464 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=10, rsp_expected=true, UNICAST3: DATA, seqno=5, conn_id=2, TP: [cluster_name=ISPN]
14:17:30,464 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981]
14:17:30,464 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (83 bytes)
14:17:30,464 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=10, rsp_expected=true, UNICAST3: DATA, seqno=5, conn_id=2, TP: [cluster_name=ISPN]
14:17:30,464 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #5, conn_id=2)
14:17:30,464 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#5
14:17:30,464 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=[]} , received=true, suspected=false
14:17:30,464 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Response(s) to StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[0, 1, 7, 10, 11, 12, 14, 16, 20, 22, 23, 26, 27, 28, 29, 30, 32, 33, 36, 45, 49, 50, 51, 52, 53, 54, 57, 58, 59]} is {InfinispanNodeFailureTest-NodeA-7443=SuccessfulResponse{responseValue=[]} }
14:17:30,464 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Applying 0 transactions for cache test_cache transferred from node InfinispanNodeFailureTest-NodeA-7443
14:17:30,464 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Requesting transactions for segments [2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56] of cache test_cache from node InfinispanNodeFailureTest-NodeB-62629
14:17:30,464 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981 invoking StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]} to recipient list [InfinispanNodeFailureTest-NodeB-62629] with options RpcOptions{timeout=240000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS}
14:17:30,464 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) dests=[InfinispanNodeFailureTest-NodeB-62629], command=StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]}, mode=SYNCHRONOUS, timeout=240000
14:17:30,465 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Replication task sending StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]} to single recipient InfinispanNodeFailureTest-NodeB-62629 with response mode GET_ALL
14:17:30,465 TRACE [org.jgroups.blocks.RequestCorrelator] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981: invoking unicast RPC [req-id=11] on InfinispanNodeFailureTest-NodeB-62629
14:17:30,465 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981: created sender window for InfinispanNodeFailureTest-NodeB-62629 (conn-id=1)
14:17:30,465 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #1, conn_id=1, first)
14:17:30,465 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=11, rsp_expected=true, UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN]
14:17:30,465 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (271 bytes (1355.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629]
14:17:30,465 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7901 (274 bytes)
14:17:30,465 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) 127.0.0.1:7902: connecting to 127.0.0.1:7901
14:17:30,466 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Waiting for initial state transfer to finish for cache test_cache on InfinispanNodeFailureTest-NodeC-7981
14:17:30,466 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=199 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=11, rsp_expected=true, UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN]
14:17:30,466 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #1, conn_id=1, first)
14:17:30,466 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: created receiver window for InfinispanNodeFailureTest-NodeC-7981 at seqno=#1 for conn-id=1
14:17:30,466 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeC-7981#1
14:17:30,466 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 11
14:17:30,466 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute CacheRpcCommand: StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]} [sender=InfinispanNodeFailureTest-NodeC-7981]
14:17:30,466 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,466 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) Calling perform() on StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]}
14:17:30,466 TRACE [org.infinispan.statetransfer.StateProviderImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) Received request for transactions from node InfinispanNodeFailureTest-NodeC-7981 for segments [2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56] of cache test_cache with topology id 4
14:17:30,466 TRACE [org.infinispan.statetransfer.StateProviderImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) Found 0 transaction(s) to transfer
14:17:30,467 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) About to send back response SuccessfulResponse{responseValue=[]} for command StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]}
14:17:30,467 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) sending rsp for 11 to InfinispanNodeFailureTest-NodeC-7981
14:17:30,467 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) InfinispanNodeFailureTest-NodeB-62629: created sender window for InfinispanNodeFailureTest-NodeC-7981 (conn-id=1)
14:17:30,467 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #1, conn_id=1, first)
14:17:30,467 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t6) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=11, rsp_expected=true, UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN]
14:17:30,467 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981]
14:17:30,467 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7902 (83 bytes)
14:17:30,467 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=11, rsp_expected=true, UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN]
14:17:30,467 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #1, conn_id=1, first)
14:17:30,467 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: created receiver window for InfinispanNodeFailureTest-NodeB-62629 at seqno=#1 for conn-id=1
14:17:30,467 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeB-62629#1
14:17:30,468 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Responses: sender=InfinispanNodeFailureTest-NodeB-62629value=SuccessfulResponse{responseValue=[]} , received=true, suspected=false
14:17:30,468 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Response(s) to StateRequestCommand{cache=test_cache, origin=InfinispanNodeFailureTest-NodeC-7981, type=GET_TRANSACTIONS, topologyId=4, segments=[2, 3, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 21, 24, 25, 31, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 55, 56]} is {InfinispanNodeFailureTest-NodeB-62629=SuccessfulResponse{responseValue=[]} }
14:17:30,468 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Applying 0 transactions for cache test_cache transferred from node InfinispanNodeFailureTest-NodeB-62629
14:17:30,468 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Finished adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] of cache test_cache
14:17:30,468 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Topology update processed, stateTransferTopologyId = 4, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners =
(3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]} 14:17:30,468 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Signalling transaction data received for topology 4 14:17:30,468 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Finished receiving of segments for cache test_cache for topology 4. 14:17:30,468 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Stop keeping track of changed keys for state transfer 14:17:30,468 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,468 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Tracking is disabled. Clear tracker: {} 14:17:30,468 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=0 14:17:30,468 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=4, rebalanceId=3, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with 
response mode GET_NONE 14:17:30,468 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #8, conn_id=0) 14:17:30,468 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=8, TP: [cluster_name=ISPN] 14:17:30,468 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,468 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,468 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (122 bytes (610.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,468 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Removing no longer owned entries for cache test_cache 14:17:30,468 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (125 bytes) 14:17:30,468 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Topology changed, recalculating minTopologyId 14:17:30,468 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t3) Changing minimum topology ID from 3 to 4 14:17:30,469 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=51 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=8, TP: [cluster_name=ISPN] 14:17:30,469 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #8, conn_id=0) 14:17:30,469 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#8 14:17:30,469 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,469 TRACE 
[org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=4, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeC-7981] 14:17:30,469 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,469 DEBUG [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeC-7981, topology id = 4 14:17:30,469 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeC-7981]ISPN100003: Finished local rebalance 14:17:30,469 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Rebalance confirmation collector 4@test_cache received confirmation for InfinispanNodeFailureTest-NodeC-7981, remaining list is [] 14:17:30,469 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) ISPN000336: Finished cluster-wide rebalance for cache test_cache, topology id = 4 14:17:30,469 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache topology updated: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, 
InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]}, members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], joiners = [] 14:17:30,469 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Current consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,469 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating cluster-wide current topology for cache test_cache, topology = CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]}, availability mode = AVAILABLE 14:17:30,470 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=AVAILABLE, 
actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000 14:17:30,470 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} 14:17:30,470 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} to addresses null with response mode GET_NONE 14:17:30,470 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Waiting on view 2 being accepted 14:17:30,470 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Updating local topology for cache test_cache: 
CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,470 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Current consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,470 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Installing new cache topology CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} on cache test_cache 14:17:30,470 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=5, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, 
InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,470 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#9 14:17:30,470 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Signalling topology 5 is installed 14:17:30,470 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=9], TP: [cluster_name=ISPN] 14:17:30,470 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 used 635 credits, 1995908 remaining 14:17:30,470 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,470 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating stable topology 
for cache test_cache: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,470 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,470 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating cluster-wide stable topology for cache test_cache, topology = CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,470 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (686 bytes (3430.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,470 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Topology update processed, stateTransferTopologyId = 4, startRebalance = false, pending CH = null 14:17:30,470 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
(remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000 14:17:30,470 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} 14:17:30,470 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (689 bytes) 14:17:30,470 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, 
InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} to addresses null with response mode GET_NONE 14:17:30,470 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Stop keeping track of changed keys for state transfer 14:17:30,470 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,470 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Tracking is disabled. Clear tracker: {} 14:17:30,470 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]} with new topology 5 and was pre false 14:17:30,470 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (689 bytes) 14:17:30,470 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) No change listeners present! 
14:17:30,470 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Unlock State Transfer in Progress for topology ID 5 14:17:30,470 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Signalling transaction data received for topology 5 14:17:30,470 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#10 14:17:30,471 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=10], TP: [cluster_name=ISPN] 14:17:30,471 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Removing no longer owned entries for cache test_cache 14:17:30,471 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 used 634 credits, 1995274 remaining 14:17:30,471 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Topology changed, recalculating minTopologyId 14:17:30,471 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Changing minimum topology ID from 4 to 5 14:17:30,471 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Updating stable topology for cache test_cache: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (685 bytes (3425.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,471 TRACE 
[org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (688 bytes) 14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (688 bytes) 14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=635 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=9], TP: [cluster_name=ISPN] 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#9 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#9 14:17:30,471 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=634 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=10], TP: [cluster_name=ISPN] 14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=634 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, 
seqno=10], TP: [cluster_name=ISPN] 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#10 14:17:30,471 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=635 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=9], TP: [cluster_name=ISPN] 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#10 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#10 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#10 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#9 14:17:30,471 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,471 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#9 14:17:30,471 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,471 TRACE 
[org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,472 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,472 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,472 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,472 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] 
(OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting 14:17:30,472 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,472 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 634 credits, 1995909 remaining 14:17:30,472 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,472 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 635 credits, 1995908 remaining 14:17:30,472 TRACE 
[org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,472 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting 14:17:30,472 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 635 credits, 1995274 remaining 14:17:30,472 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 634 credits, 1995274 remaining 14:17:30,472 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Updating stable topology for cache test_cache: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,472 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Waiting on view 2 being accepted 14:17:30,472 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Updating local topology for cache test_cache: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, 
actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,472 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Current consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,472 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Installing new cache topology CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} on cache test_cache 14:17:30,472 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Waiting on view 2 being accepted 14:17:30,472 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=5, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, 
actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,472 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Updating local topology for cache test_cache: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,473 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Signalling topology 5 is installed 14:17:30,473 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Current consistent hash's routing table: [0, 0, 2, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 2, 2, 1, 2, 1, 1, 1, 0, 1, 0, 2, 1, 1, 0, 0, 0, 2, 2, 2, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 2, 2, 1, 1, 2, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0] 14:17:30,473 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Installing new cache topology CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], 
persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} on cache test_cache 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Topology update processed, stateTransferTopologyId = 4, startRebalance = false, pending CH = null 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=5, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,473 TRACE 
[org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Stop keeping track of changed keys for state transfer 14:17:30,473 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,473 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Tracking is disabled. Clear tracker: {} 14:17:30,473 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Signalling topology 5 is installed 14:17:30,473 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]} with new topology 5 and was pre false 14:17:30,473 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) No change listeners present! 
14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Unlock State Transfer in Progress for topology ID 5 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,473 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Signalling transaction data received for topology 5 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Topology update processed, stateTransferTopologyId = 4, startRebalance = false, pending CH = null 14:17:30,473 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Checking for transactions originated on leavers. 
Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Stop keeping track of changed keys for state transfer 14:17:30,473 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) No remote transactions pertain to originator(s) who have left the cluster. 14:17:30,473 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,473 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Tracking is disabled. Clear tracker: {} 14:17:30,473 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Removing no longer owned entries for cache test_cache 14:17:30,473 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Topology changed, recalculating minTopologyId 14:17:30,473 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]} with new topology 5 and was pre false 14:17:30,473 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t1) Changing minimum topology ID from 4 to 5 14:17:30,473 TRACE 
[org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) No change listeners present! 14:17:30,473 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Unlock State Transfer in Progress for topology ID 5 14:17:30,473 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Signalling transaction data received for topology 5 14:17:30,473 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,473 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,473 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Removing no longer owned entries for cache test_cache 14:17:30,473 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Topology changed, recalculating minTopologyId 14:17:30,473 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Changing minimum topology ID from 4 to 5 14:17:30,473 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Initial state transfer complete for cache test_cache on node InfinispanNodeFailureTest-NodeC-7981 14:17:30,473 DEBUG [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Started cache test_cache on InfinispanNodeFailureTest-NodeC-7981 14:17:30,473 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Closing latch for cache test_cache 14:17:30,473 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeC-p12-t4) Updating stable topology for cache test_cache: CacheTopology{id=5, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[InfinispanNodeFailureTest-NodeA-7443: 18, InfinispanNodeFailureTest-NodeB-62629: 21, InfinispanNodeFailureTest-NodeC-7981: 21]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3, 4e681186-1323-4fd9-bb28-fee04bf0f3a5]} 14:17:30,564 TRACE [org.jgroups.protocols.UNICAST3] (Timer-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> ACK(InfinispanNodeFailureTest-NodeB-62629: #11) 14:17:30,564 TRACE [org.jgroups.protocols.TCP_NIO2] 
(Timer-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are UNICAST3: ACK, seqno=11, ts=1, TP: [cluster_name=ISPN] 14:17:30,564 TRACE [org.jgroups.protocols.UNICAST3] (Timer-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> ACK(InfinispanNodeFailureTest-NodeC-7981: #8) 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (Timer-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are UNICAST3: ACK, seqno=8, ts=2, TP: [cluster_name=ISPN] 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (63 bytes) 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981] 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (63 bytes) 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (2 headers), size=0 bytes, flags=INTERNAL], headers are UNICAST3: ACK, seqno=11, ts=1, TP: [cluster_name=ISPN] 14:17:30,565 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeB-62629) 
InfinispanNodeFailureTest-NodeB-62629 <-- ACK(InfinispanNodeFailureTest-NodeA-7443: #11, conn-id=0, ts=1) 14:17:30,565 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (2 headers), size=0 bytes, flags=INTERNAL], headers are UNICAST3: ACK, seqno=8, ts=2, TP: [cluster_name=ISPN] 14:17:30,565 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- ACK(InfinispanNodeFailureTest-NodeA-7443: #8, conn-id=0, ts=2) 14:17:30,578 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeA-7443 finished state transfer. 14:17:30,578 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeB-62629 finished state transfer. 14:17:30,578 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeC-7981 finished state transfer. 14:17:30,578 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeA-7443 finished state transfer. 14:17:30,579 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeB-62629 finished state transfer. 14:17:30,579 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeC-7981 finished state transfer. 14:17:30,581 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) Created new transaction with Xid=DummyXid{id=1} 14:17:30,581 TRACE [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Implicit transaction started! 
Transaction: DummyTransaction{xid=DummyXid{id=1}, status=0} 14:17:30,582 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Created a new local transaction: LocalXaTransaction{xid=null} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1 14:17:30,583 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@4d84d306] 14:17:30,584 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (testng-InfinispanNodeFailureTest) handleTxWriteCommand for command PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}, origin null 14:17:30,585 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) Transaction.enlistResource(TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=null} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1}) invoked in transaction with Xid=DummyXid{id=1} 14:17:30,585 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XaResource.start() invoked in transaction with Xid=DummyXid{id=1} 14:17:30,585 TRACE [org.infinispan.transaction.xa.XaTransactionTable] (testng-InfinispanNodeFailureTest) 
start called on tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 14:17:30,585 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (testng-InfinispanNodeFailureTest) acquireLocalLock 14:17:30,585 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (testng-InfinispanNodeFailureTest) Acquiring locks on MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. 14:17:30,586 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (testng-InfinispanNodeFailureTest) Await for pending transactions for transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 using null 14:17:30,586 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (testng-InfinispanNodeFailureTest) Locking key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, no need to check for pending locks. 14:17:30,586 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (testng-InfinispanNodeFailureTest) Registering locked key: MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} 14:17:30,586 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (testng-InfinispanNodeFailureTest) Lock key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} for owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1. timeout=20000 (MILLISECONDS) 14:17:30,590 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Acquire lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1. Timeout=20000 (MILLISECONDS) 14:17:30,590 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Created a new one: LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1} 14:17:30,590 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Try acquire. 
Next in queue=LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}. Current=null 14:17:30,590 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Lock Owner CAS(null, LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}) => true 14:17:30,590 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) State changed for LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}. WAITING => ACQUIRED 14:17:30,590 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1} successfully acquired the lock. 14:17:30,591 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Exists in context? null 14:17:30,591 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Retrieved from container null 14:17:30,591 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} 14:17:30,592 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Updated context entry ReadCommittedEntry(228651d3){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=null, isCreated=true, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=null} 14:17:30,592 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: PutKeyValueCommand 14:17:30,593 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (testng-InfinispanNodeFailureTest) The return value is null 14:17:30,594 TRACE [org.infinispan.transaction.impl.LocalTransaction] (testng-InfinispanNodeFailureTest) Adding 
modification PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}. Mod list is null 14:17:30,595 TRACE [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Committing transaction as it was implicit: DummyTransaction{xid=DummyXid{id=1}, status=0} 14:17:30,595 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) Transaction.commit() invoked in transaction with Xid=DummyXid{id=1} 14:17:30,595 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) runPrepare() invoked in transaction with Xid=DummyXid{id=1} 14:17:30,598 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XAResource.end() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=1}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1} 14:17:30,598 TRACE [org.infinispan.transaction.xa.XaTransactionTable] (testng-InfinispanNodeFailureTest) end called on tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1(test_cache) 14:17:30,598 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XaResource.prepare() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=1}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1} 14:17:30,598 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (testng-InfinispanNodeFailureTest) 
Received prepare for tx: LocalXaTransaction{xid=DummyXid{id=1}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1. Skipping call as 1PC will be used.
14:17:30,598 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) runCommit(forceRollback=false) invoked in transaction with Xid=DummyXid{id=1}
14:17:30,598 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XaResource.commit() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=1}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1}
14:17:30,598 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (testng-InfinispanNodeFailureTest) Committing transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1
14:17:30,598 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (testng-InfinispanNodeFailureTest) Doing an 1PC prepare call on the interceptor chain
14:17:30,599 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=-1} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@169c80cd]
14:17:30,599 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (testng-InfinispanNodeFailureTest) handleTxCommand for command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=-1}, origin null
14:17:30,600 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: PrepareCommand
14:17:30,600 TRACE [org.infinispan.interceptors.distribution.BaseDistributionInterceptor] (testng-InfinispanNodeFailureTest) Should invoke remotely? true. hasModifications=true, hasRemoteLocksAcquired=false
14:17:30,601 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 invoking PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS_IGNORE_LEAVERS}
14:17:30,601 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=null, command=PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}, mode=SYNCHRONOUS_IGNORE_LEAVERS, timeout=15000
14:17:30,601 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} to addresses null with response mode GET_ALL
14:17:30,602 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: invoking multicast RPC [req-id=12]
14:17:30,602 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#11
14:17:30,602 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=12, rsp_expected=true, NAKACK2: [MSG, seqno=11], TP: [cluster_name=ISPN]
14:17:30,602 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 used 301 credits, 1994973 remaining
14:17:30,602 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (353 bytes (1765.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,602 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (356 bytes)
14:17:30,603 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (356 bytes)
14:17:30,603 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=301 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=12, rsp_expected=true, NAKACK2: [MSG, seqno=11], TP: [cluster_name=ISPN]
14:17:30,603 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#11
14:17:30,603 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=301 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=12, rsp_expected=true, NAKACK2: [MSG, seqno=11], TP: [cluster_name=ISPN]
14:17:30,603 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#11
14:17:30,603 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#11
14:17:30,603 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 12
14:17:30,603 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#11
14:17:30,603 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 12
14:17:30,604 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (testng-InfinispanNodeFailureTest) 6A779886 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@149a5bf7 = 149A5BF7
14:17:30,605 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (testng-InfinispanNodeFailureTest) 149A5BF7 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@4be08fe2 = 4BE08FE2
14:17:30,605 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (testng-InfinispanNodeFailureTest) 4BE08FE2 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@a9c5812 = 0A9C5812
14:17:30,605 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute CacheRpcCommand: PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,605 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Checking if transaction data was received for topology 5, current topology is 5
14:17:30,605 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,605 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 301 credits, 1994973 remaining
14:17:30,605 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Calling perform() on PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,605 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute CacheRpcCommand: PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,605 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Checking if transaction data was received for topology 5, current topology is 5
14:17:30,605 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,605 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 301 credits, 1994973 remaining
14:17:30,605 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Calling perform() on PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,606 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Created and registered remote transaction RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null}
14:17:30,606 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Created and registered remote transaction RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null}
14:17:30,606 TRACE [org.infinispan.commands.tx.PrepareCommand] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking remotely originated prepare: PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} with invocation context: org.infinispan.context.impl.RemoteTxInvocationContext@a70e38d9
14:17:30,606 TRACE [org.infinispan.commands.tx.PrepareCommand] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoking remotely originated prepare: PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} with invocation context: org.infinispan.context.impl.RemoteTxInvocationContext@a70e38d9
14:17:30,607 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoked with command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@a70e38d9]
14:17:30,607 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoked with command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@a70e38d9]
14:17:30,607 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (testng-InfinispanNodeFailureTest) Compose 7D519C57 (java.util.concurrent.CompletableFuture@4e8731ef[Not completed, 1 dependents]), result future java.util.concurrent.CompletableFuture@3281ecaa[Not completed]
14:17:30,607 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (testng-InfinispanNodeFailureTest) 0F88CCD9 awaiting future java.util.concurrent.CompletableFuture@3281ecaa[Not completed]
14:17:30,607 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) handleTxCommand for command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}, origin InfinispanNodeFailureTest-NodeA-7443
14:17:30,607 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) handleTxCommand for command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}, origin InfinispanNodeFailureTest-NodeA-7443
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Exists in context? null
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Exists in context? null
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Retrieved from container null
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Retrieved from container null
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}
14:17:30,607 TRACE [org.infinispan.transaction.impl.RemoteTransaction] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Adding key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} to tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1
14:17:30,607 TRACE [org.infinispan.transaction.impl.RemoteTransaction] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Adding key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} to tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Updated context entry ReadCommittedEntry(5c436c7b){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=null, isCreated=true, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=null}
14:17:30,607 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoking: PutKeyValueCommand
14:17:30,607 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Updated context entry ReadCommittedEntry(38c203d6){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=null, isCreated=true, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=null}
14:17:30,607 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking: PutKeyValueCommand
14:17:30,607 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoking: PrepareCommand
14:17:30,607 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking: PrepareCommand
14:17:30,607 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) About to commit entry ReadCommittedEntry(5c436c7b){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}
14:17:30,607 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to commit entry ReadCommittedEntry(38c203d6){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}
14:17:30,608 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false
14:17:30,608 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false
14:17:30,608 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled!
14:17:30,608 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled!
14:17:30,608 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Creating new ICE for writing. Existing=null, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=0
14:17:30,608 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Creating new ICE for writing. Existing=null, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=0
14:17:30,608 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0} in container
14:17:30,608 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0} in container
14:17:30,611 TRACE [org.infinispan.interceptors.impl.TxInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Verifying transaction: originatorMissing=false, alreadyCompleted=false
14:17:30,611 TRACE [org.infinispan.interceptors.impl.TxInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Verifying transaction: originatorMissing=false, alreadyCompleted=false
14:17:30,612 TRACE [org.infinispan.statetransfer.TransactionSynchronizerInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Completing tx command release future for RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(38c203d6){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=5, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null}
14:17:30,612 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to send back response null for command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,612 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) sending rsp for 12 to InfinispanNodeFailureTest-NodeA-7443
14:17:30,612 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #12, conn_id=0)
14:17:30,612 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=12, rsp_expected=true, UNICAST3: DATA, seqno=12, TP: [cluster_name=ISPN]
14:17:30,613 TRACE [org.infinispan.statetransfer.TransactionSynchronizerInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Completing tx command release future for RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(5c436c7b){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=5, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null}
14:17:30,613 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) About to send back response null for command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,613 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) sending rsp for 12 to InfinispanNodeFailureTest-NodeA-7443
14:17:30,613 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #9, conn_id=0)
14:17:30,613 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=12, rsp_expected=true, UNICAST3: DATA, seqno=9, TP: [cluster_name=ISPN]
14:17:30,613 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (74 bytes (370.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,613 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (74 bytes (370.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,613 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (77 bytes)
14:17:30,613 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (77 bytes)
14:17:30,614 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=2 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=12, rsp_expected=true, UNICAST3: DATA, seqno=9, TP: [cluster_name=ISPN]
14:17:30,614 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=2 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=12, rsp_expected=true, UNICAST3: DATA, seqno=12, TP: [cluster_name=ISPN]
14:17:30,614 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #9, conn_id=0)
14:17:30,614 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #12, conn_id=0)
14:17:30,614 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#9
14:17:30,614 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#12
14:17:30,614 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Responses: Responses{ InfinispanNodeFailureTest-NodeB-62629: sender=InfinispanNodeFailureTest-NodeB-62629, received=true, suspected=false InfinispanNodeFailureTest-NodeC-7981: sender=InfinispanNodeFailureTest-NodeC-7981, received=true, suspected=false}
14:17:30,614 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Response(s) to PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5} is {InfinispanNodeFailureTest-NodeB-62629=SuccessfulResponse{responseValue=null} , InfinispanNodeFailureTest-NodeC-7981=SuccessfulResponse{responseValue=null} }
14:17:30,614 TRACE [org.infinispan.transaction.impl.LocalTransaction] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Adding remote locks on [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981]. Remote locks are null
14:17:30,615 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Executing invocation handler EntryWrappingInterceptor$$Lambda$53/1706185143 with command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,615 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) About to commit entry ReadCommittedEntry(228651d3){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}
14:17:30,615 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false
14:17:30,615 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled!
14:17:30,615 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Creating new ICE for writing. Existing=null, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=0
14:17:30,615 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0} in container
14:17:30,615 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Executing invocation handler NotificationInterceptor$1 with command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,615 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Executing invocation handler PessimisticLockingInterceptor$$Lambda$112/1559840931 with command PrepareCommand {modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName='test_cache', topologyId=5}
14:17:30,615 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Release locks for keys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}]. owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1
14:17:30,615 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Release lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1.
14:17:30,615 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) State changed for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}. ACQUIRED => RELEASED
14:17:30,616 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Release lock for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}? true
14:17:30,616 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Try acquire. Next in queue=null. Current=LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}
14:17:30,616 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Lock Owner CAS(LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1}, null) => true
14:17:30,616 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Try acquire. Next in queue=null. Current=null
14:17:30,616 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Clearing locked keys: [MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}]
14:17:30,616 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (testng-InfinispanNodeFailureTest) 0F88CCD9 awaiting future java.util.concurrent.CompletableFuture@3281ecaa[Completed normally] finished with ReturnValueStage(null)=7ECA4A5B
14:17:30,616 TRACE [org.jgroups.protocols.UNICAST3] (Timer-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 --> ACK(InfinispanNodeFailureTest-NodeC-7981: #1)
14:17:30,616 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Removed LocalXaTransaction{xid=DummyXid{id=1}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@1 from transaction table.
14:17:30,616 TRACE [org.jgroups.protocols.TCP_NIO2] (Timer-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeB-62629, headers are UNICAST3: ACK, seqno=1, conn_id=1, ts=1, TP: [cluster_name=ISPN]
14:17:30,617 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (testng-InfinispanNodeFailureTest) Transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 has completed, notifying listening threads.
14:17:30,617 TRACE [org.jgroups.protocols.UNICAST3] (Timer-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 --> ACK(InfinispanNodeFailureTest-NodeA-7443: #6)
14:17:30,617 TRACE [org.jgroups.protocols.TCP_NIO2] (Timer-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are UNICAST3: ACK, seqno=6, conn_id=1, ts=2, TP: [cluster_name=ISPN]
14:17:30,617 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 2 msgs (120 bytes (600.00% of max_bundle_size) to 2 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981, ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,617 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7902 (63 bytes)
14:17:30,617 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Committed in onePhase? true isOptimistic?
false 14:17:30,617 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (63 bytes) 14:17:30,617 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeB-62629 (2 headers), size=0 bytes, flags=INTERNAL], headers are UNICAST3: ACK, seqno=1, conn_id=1, ts=1, TP: [cluster_name=ISPN] 14:17:30,617 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (2 headers), size=0 bytes, flags=INTERNAL], headers are UNICAST3: ACK, seqno=6, conn_id=1, ts=2, TP: [cluster_name=ISPN] 14:17:30,617 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- ACK(InfinispanNodeFailureTest-NodeB-62629: #1, conn-id=1, ts=1) 14:17:30,617 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- ACK(InfinispanNodeFailureTest-NodeB-62629: #6, conn-id=1, ts=2) 14:17:30,618 TRACE [org.infinispan.transaction.impl.LocalTransaction] (testng-InfinispanNodeFailureTest) getCommitNodes recipients=null, currentTopologyId=5, members=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], txTopologyId=5 14:17:30,618 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) About to invoke tx completion notification on commitNodes: null 14:17:30,618 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 invoking TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=-1, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, 
cacheName=test_cache} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=ASYNCHRONOUS} 14:17:30,618 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) Topology id missing on command TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=-1, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} , setting it to 5 14:17:30,618 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=null, command=TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} , mode=ASYNCHRONOUS, timeout=15000 14:17:30,618 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} to addresses null with response mode GET_NONE 14:17:30,618 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#12 14:17:30,618 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=12], TP: [cluster_name=ISPN] 14:17:30,618 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 used 61 credits, 1994912 remaining 14:17:30,618 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) Response(s) to TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, 
gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} is {} 14:17:30,618 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (112 bytes (560.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,618 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (115 bytes) 14:17:30,618 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (115 bytes) 14:17:30,618 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=12], TP: [cluster_name=ISPN] 14:17:30,619 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#12 14:17:30,619 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=12], TP: [cluster_name=ISPN] 14:17:30,619 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#12 14:17:30,619 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#12 14:17:30,619 TRACE 
[org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#12 14:17:30,619 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,619 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,619 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute CacheRpcCommand: TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,619 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute CacheRpcCommand: TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,619 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Checking if transaction data was received for topology 5, current topology is 5 14:17:30,619 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Checking if transaction data was received for topology 5, current topology is 5 14:17:30,619 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,619 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] 
(OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting 14:17:30,619 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 61 credits, 1994912 remaining 14:17:30,619 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 61 credits, 1994912 remaining 14:17:30,619 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Calling perform() on TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} 14:17:30,619 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Calling perform() on TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=5, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, cacheName=test_cache} 14:17:30,619 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Processing completed transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 14:17:30,619 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Processing completed transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 14:17:30,619 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Removed remote transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 ? 
RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(38c203d6){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=5, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null} 14:17:30,619 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Removed remote transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 ? RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(5c436c7b){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=5, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null} 14:17:30,619 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Removed 
RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(38c203d6){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=5, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null} from transaction table. 14:17:30,619 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 has completed, notifying listening threads. 
14:17:30,619 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Removed RemoteTransaction{modifications=[PutKeyValueCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(5c436c7b){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=5, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1, state=null} from transaction table. 14:17:30,619 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 has completed, notifying listening threads. 14:17:30,619 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Marking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 as completed 14:17:30,619 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Marking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 as completed 14:17:30,620 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking forward of TxCompletionNotification for transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1. 
Affected keys: [] 14:17:30,620 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoking forward of TxCompletionNotification for transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1. Affected keys: [] 14:17:30,620 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) CommandTopologyId=5, localTopologyId=5 14:17:30,620 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) CommandTopologyId=5, localTopologyId=5 14:17:30,620 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Release locks for keys=[]. owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 14:17:30,620 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Release locks for keys=[]. owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:1 14:17:30,620 TRACE [org.infinispan.tx.InfinispanNodeFailureTest] (ForkThread-1,InfinispanNodeFailureTest) Started fork callable.. 
14:17:30,620 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) Created new transaction with Xid=DummyXid{id=2} 14:17:30,620 TRACE [org.infinispan.transaction.impl.TransactionTable] (ForkThread-1,InfinispanNodeFailureTest) Created a new local transaction: LocalXaTransaction{xid=null} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2 14:17:30,620 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Invoked with command ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_EXPECTED} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@74243960] 14:17:30,620 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (ForkThread-1,InfinispanNodeFailureTest) handleTxWriteCommand for command ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_EXPECTED}, origin null 14:17:30,620 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) Transaction.enlistResource(TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=null} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2}) invoked in transaction with Xid=DummyXid{id=2} 14:17:30,621 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) XaResource.start() invoked in transaction with 
Xid=DummyXid{id=2} 14:17:30,621 TRACE [org.infinispan.transaction.xa.XaTransactionTable] (ForkThread-1,InfinispanNodeFailureTest) start called on tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,621 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (ForkThread-1,InfinispanNodeFailureTest) acquireLocalLock 14:17:30,621 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Acquiring locks on MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (ForkThread-1,InfinispanNodeFailureTest) Await for pending transactions for transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 using null 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (ForkThread-1,InfinispanNodeFailureTest) Locking key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, no need to check for pending locks. 14:17:30,621 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (ForkThread-1,InfinispanNodeFailureTest) Registering locked key: MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (ForkThread-1,InfinispanNodeFailureTest) Lock key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} for owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. timeout=20000 (MILLISECONDS) 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (ForkThread-1,InfinispanNodeFailureTest) Acquire lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. 
Timeout=20000 (MILLISECONDS) 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (ForkThread-1,InfinispanNodeFailureTest) Created a new one: LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (ForkThread-1,InfinispanNodeFailureTest) Try acquire. Next in queue=LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}. Current=null 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (ForkThread-1,InfinispanNodeFailureTest) Lock Owner CAS(null, LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}) => true 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (ForkThread-1,InfinispanNodeFailureTest) State changed for LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}. WAITING => ACQUIRED 14:17:30,621 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (ForkThread-1,InfinispanNodeFailureTest) LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} successfully acquired the lock. 14:17:30,621 TRACE [org.infinispan.container.EntryFactoryImpl] (ForkThread-1,InfinispanNodeFailureTest) Exists in context? 
null 14:17:30,621 TRACE [org.infinispan.container.EntryFactoryImpl] (ForkThread-1,InfinispanNodeFailureTest) Retrieved from container ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0} 14:17:30,621 TRACE [org.infinispan.container.EntryFactoryImpl] (ForkThread-1,InfinispanNodeFailureTest) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} 14:17:30,621 TRACE [org.infinispan.container.EntryFactoryImpl] (ForkThread-1,InfinispanNodeFailureTest) Updated context entry ReadCommittedEntry(701ba33c){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=false, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedMetadata{version=null}} 14:17:30,621 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Invoking: ReplaceCommand 14:17:30,621 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (ForkThread-1,InfinispanNodeFailureTest) The return value is true 14:17:30,621 TRACE [org.infinispan.transaction.impl.LocalTransaction] (ForkThread-1,InfinispanNodeFailureTest) Adding modification ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}. 
Mod list is null 14:17:30,621 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Invoked with command PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@3519335b] 14:17:30,621 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (ForkThread-1,InfinispanNodeFailureTest) handleTxWriteCommand for command PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}, origin null 14:17:30,622 TRACE [org.infinispan.transaction.impl.LocalTransaction] (ForkThread-1,InfinispanNodeFailureTest) Adding remote locks on [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981]. 
Remote locks are null 14:17:30,622 TRACE [org.infinispan.interceptors.distribution.TxDistributionInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Registered remote locks acquired null 14:17:30,622 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 invoking LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS_IGNORE_LEAVERS} 14:17:30,622 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (ForkThread-1,InfinispanNodeFailureTest) Topology id missing on command LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}, setting it to 5 14:17:30,622 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ForkThread-1,InfinispanNodeFailureTest) dests=null, command=LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}, mode=SYNCHRONOUS_IGNORE_LEAVERS, timeout=15000 14:17:30,622 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (ForkThread-1,InfinispanNodeFailureTest) Replication task sending LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} to addresses null with response mode GET_ALL 14:17:30,622 TRACE [org.jgroups.blocks.RequestCorrelator] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: invoking multicast RPC [req-id=13] 14:17:30,622 TRACE 
[org.jgroups.protocols.pbcast.NAKACK2] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#13 14:17:30,622 TRACE [org.jgroups.protocols.TCP_NIO2] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=13, rsp_expected=true, NAKACK2: [MSG, seqno=13], TP: [cluster_name=ISPN] 14:17:30,622 TRACE [org.jgroups.protocols.MFC] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 used 237 credits, 1994675 remaining 14:17:30,622 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (289 bytes (1445.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,623 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (292 bytes) 14:17:30,623 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (292 bytes) 14:17:30,623 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=237 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=13, rsp_expected=true, NAKACK2: [MSG, seqno=13], TP: [cluster_name=ISPN] 14:17:30,623 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#13 14:17:30,623 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#13 14:17:30,623 TRACE 
[org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=237 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=13, rsp_expected=true, NAKACK2: [MSG, seqno=13], TP: [cluster_name=ISPN] 14:17:30,623 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#13 14:17:30,623 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 13 14:17:30,623 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#13 14:17:30,623 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 13 14:17:30,623 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) Compose 55E4CA7C (java.util.concurrent.CompletableFuture@93899bb[Not completed, 1 dependents]), result future java.util.concurrent.CompletableFuture@1ffaa1f2[Not completed] 14:17:30,623 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) Compose 74536065 (java.util.concurrent.CompletableFuture@1ffaa1f2[Not completed, 1 dependents]), result future java.util.concurrent.CompletableFuture@797f3b9[Not completed] 14:17:30,624 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) Compose 4AAF47B1 (java.util.concurrent.CompletableFuture@797f3b9[Not completed, 1 dependents]), result future 
java.util.concurrent.CompletableFuture@909a27[Not completed] 14:17:30,624 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 6F2F8218 awaiting future java.util.concurrent.CompletableFuture@909a27[Not completed] 14:17:30,624 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute CacheRpcCommand: LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,624 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute CacheRpcCommand: LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,628 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Checking if transaction data was received for topology 5, current topology is 5 14:17:30,628 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Checking if transaction data was received for topology 5, current topology is 5 14:17:30,629 TRACE [org.infinispan.transaction.impl.TransactionTable] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Created and registered remote transaction RemoteTransaction{modifications=[], lookedUpEntries={}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} 14:17:30,629 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new 
task directly: 0 task(s) are waiting 14:17:30,629 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 237 credits, 1994675 remaining 14:17:30,629 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Calling perform() on LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,629 TRACE [org.infinispan.transaction.impl.TransactionTable] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Created and registered remote transaction RemoteTransaction{modifications=[], lookedUpEntries={}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} 14:17:30,630 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoked with command LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@a70e38da] 14:17:30,630 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Checking for pending locks and then locking key MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} 14:17:30,630 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) handleTxCommand for command LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}, origin 
InfinispanNodeFailureTest-NodeA-7443 14:17:30,630 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) No transactions pending for Transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,630 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Invoking: LockControlCommand 14:17:30,630 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Stored PendingLock is NO_OP 14:17:30,630 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Acquiring backup locks on MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. 14:17:30,631 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Registering locked key: MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Lock key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} for owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. timeout=20000 (MILLISECONDS) 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Acquire lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. Timeout=20000 (MILLISECONDS) 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Created a new one: LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Try acquire. 
Next in queue=LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}. Current=null 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Lock Owner CAS(null, LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}) => true 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) State changed for LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}. WAITING => ACQUIRED 14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} successfully acquired the lock. 14:17:30,631 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,631 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Calling perform() on LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,631 TRACE [org.infinispan.interceptors.impl.TxInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Verifying transaction: originatorMissing=false, alreadyCompleted=false 14:17:30,631 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 237 credits, 1994675 remaining 14:17:30,631 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoked with command LockControlCommand{cache=test_cache, 
keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@a70e38da] 14:17:30,631 TRACE [org.infinispan.statetransfer.TransactionSynchronizerInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Completing tx command release future for RemoteTransaction{modifications=[], lookedUpEntries={}, lockedKeys=[], backupKeyLocks=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} 14:17:30,631 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) handleTxCommand for command LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}, origin InfinispanNodeFailureTest-NodeA-7443 14:17:30,631 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking: LockControlCommand 14:17:30,631 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) About to send back response SuccessfulResponse{responseValue=true} for command LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,631 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Acquiring locks on MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. 
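(Aside on the `InfinispanLock` trace lines around this point: they show a per-key lock where each requestor enqueues a `LockPlaceHolder` and the head of the queue tries to CAS itself into the owner slot — "Lock Owner CAS(null, LockPlaceHolder{...}) => true", then "WAITING => ACQUIRED", while a second requestor for the same key sees "Lock owner already exists". The following is a minimal illustrative sketch of that pattern, not Infinispan's actual implementation; the names `KeyLockSketch` and `tryAcquire` are invented for this example.)

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch (NOT Infinispan's code) of the per-key lock pattern
// visible in the InfinispanLock traces: waiters queue a placeholder per key,
// and only the head of the queue may CAS itself into the "current owner" slot.
public class KeyLockSketch {
    enum LockState { WAITING, ACQUIRED, RELEASED }

    static final class LockPlaceHolder {
        final String owner;                    // e.g. a global transaction id
        LockState state = LockState.WAITING;
        LockPlaceHolder(String owner) { this.owner = owner; }
    }

    private final Queue<LockPlaceHolder> queue = new ArrayDeque<>();
    private final AtomicReference<LockPlaceHolder> current = new AtomicReference<>();

    // Mirrors "Lock Owner CAS(null, placeholder) => true": the next waiter in
    // the queue becomes owner only when no owner is currently installed.
    synchronized boolean tryAcquire(LockPlaceHolder placeHolder) {
        if (!queue.contains(placeHolder)) {
            queue.add(placeHolder);
        }
        if (queue.peek() == placeHolder && current.compareAndSet(null, placeHolder)) {
            placeHolder.state = LockState.ACQUIRED;   // WAITING => ACQUIRED
            queue.poll();
            return true;
        }
        return false;                                  // stays WAITING in the queue
    }

    synchronized void release(LockPlaceHolder placeHolder) {
        placeHolder.state = LockState.RELEASED;
        current.compareAndSet(placeHolder, null);      // hand the owner slot back
    }

    public static void main(String[] args) {
        KeyLockSketch lock = new KeyLockSketch();
        LockPlaceHolder tx2 = new LockPlaceHolder("GlobalTx:NodeA:2");
        LockPlaceHolder tx3 = new LockPlaceHolder("GlobalTx:NodeA:3");
        System.out.println(lock.tryAcquire(tx2)); // true: CAS(null, tx2) succeeds
        System.out.println(lock.tryAcquire(tx3)); // false: owner already exists
        lock.release(tx2);
    }
}
```

This also matches why NodeB's second acquisition attempt for the same transaction short-circuits: the placeholder installed by the OOB thread is already the owner, so the remote-thread attempt observes "Lock owner already exists" rather than blocking.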
14:17:30,631 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Await for pending transactions for transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 using NO_OP 14:17:30,631 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) sending rsp for 13 to InfinispanNodeFailureTest-NodeA-7443 14:17:30,631 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #10, conn_id=0) 14:17:30,631 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=13, rsp_expected=true, UNICAST3: DATA, seqno=10, TP: [cluster_name=ISPN] 14:17:30,632 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Registering locked key: MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} 14:17:30,632 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Lock key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} for owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. timeout=20000 (MILLISECONDS) 14:17:30,632 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Acquire lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. 
Timeout=20000 (MILLISECONDS) 14:17:30,632 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Lock owner already exists: LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,632 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,632 TRACE [org.infinispan.interceptors.impl.TxInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Verifying transaction: originatorMissing=false, alreadyCompleted=false 14:17:30,632 TRACE [org.infinispan.statetransfer.TransactionSynchronizerInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Completing tx command release future for RemoteTransaction{modifications=[], lookedUpEntries={}, lockedKeys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], backupKeyLocks=[], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} 14:17:30,632 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (80 bytes) 14:17:30,633 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=13, rsp_expected=true, UNICAST3: DATA, seqno=10, TP: [cluster_name=ISPN] 14:17:30,633 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #10, conn_id=0) 14:17:30,633 TRACE 
[org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#10 14:17:30,634 DEBUG [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Stopping cache manager ISPN on InfinispanNodeFailureTest-NodeC-7981 14:17:30,634 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Cache stop order: [test_cache] 14:17:30,634 DEBUG [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Stopping cache test_cache on InfinispanNodeFailureTest-NodeC-7981 14:17:30,635 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Shutting down StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeC-7981 14:17:30,635 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeC-7981 leaving cache test_cache 14:17:30,635 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=SYNCHRONOUS, timeout=240000 14:17:30,635 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,635 TRACE [org.jgroups.blocks.RequestCorrelator] 
(testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: invoking unicast RPC [req-id=14] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,635 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #11, conn_id=0) 14:17:30,635 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=14, rsp_expected=true, UNICAST3: DATA, seqno=11, TP: [cluster_name=ISPN] 14:17:30,635 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (118 bytes (590.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,635 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (121 bytes) 14:17:30,636 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=46 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=14, rsp_expected=true, UNICAST3: DATA, seqno=11, TP: [cluster_name=ISPN] 14:17:30,636 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #11, conn_id=0) 14:17:30,636 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#11 14:17:30,636 TRACE [org.jgroups.blocks.RequestCorrelator] 
(OOB-2,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 14 14:17:30,636 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeC-7981] 14:17:30,636 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,636 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Removed node InfinispanNodeFailureTest-NodeC-7981 from cache test_cache: members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = [] 14:17:30,637 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache topology updated: CacheTopology{id=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = [] 14:17:30,637 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 
1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,637 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating cluster-wide current topology for cache test_cache, topology = CacheTopology{id=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, availability mode = AVAILABLE 14:17:30,637 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000 14:17:30,637 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} 14:17:30,637 TRACE 
[org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE 14:17:30,637 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Waiting on view 2 being accepted 14:17:30,637 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Updating local topology for cache test_cache: CacheTopology{id=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,637 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,637 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Installing new cache topology CacheTopology{id=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = 
(2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,637 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=6, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,637 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Signalling topology 6 is installed 14:17:30,637 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#14 14:17:30,637 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,637 TRACE 
[org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,637 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=14], TP: [cluster_name=ISPN] 14:17:30,637 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Topology update processed, stateTransferTopologyId = -1, startRebalance = false, pending CH = null 14:17:30,637 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1994102 remaining 14:17:30,638 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Signalling transaction data received for topology 6 14:17:30,638 DEBUG [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Queueing rebalance for cache test_cache with members [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,638 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Rebalancing consistent hash for cache test_cache, members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,638 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Checking for transactions originated on leavers. 
Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,638 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) No remote transactions pertain to originator(s) who have left the cluster. 14:17:30,638 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (624 bytes (3120.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,638 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (627 bytes) 14:17:30,638 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Removing no longer owned entries for cache test_cache 14:17:30,638 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Topology changed, recalculating minTopologyId 14:17:30,638 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (627 bytes) 14:17:30,638 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Minimum topology ID still is 5; nothing to change 14:17:30,638 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating cache test_cache topology for rebalance: CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, 
InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,638 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=573 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=14], TP: [cluster_name=ISPN] 14:17:30,638 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=573 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=14], TP: [cluster_name=ISPN] 14:17:30,638 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#14 14:17:30,638 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#14 14:17:30,638 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache topology updated: CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, members = 
[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = [] 14:17:30,638 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#14 14:17:30,638 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#14 14:17:30,638 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,638 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,638 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Pending consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,638 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,638 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Initialized rebalance confirmation collector 7@test_cache, initial list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,638 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) ISPN000310: Starting cluster-wide rebalance for cache test_cache, topology 
CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,639 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeA-7443]ISPN100002: Started local rebalance 14:17:30,639 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000 14:17:30,639 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, 
pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} 14:17:30,639 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE 14:17:30,639 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,639 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting 14:17:30,639 TRACE 
[org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Waiting on view 2 being accepted 14:17:30,639 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1994102 remaining 14:17:30,639 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Updating local topology for cache test_cache: CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,639 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Starting local rebalance for cache test_cache, topology = CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,639 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Ignoring consistent hash update 6 for cache test_cache that doesn't exist locally 14:17:30,639 TRACE [org.infinispan.topology.CacheTopology] 
(transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,639 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Pending consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,639 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Installing new cache topology CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,639 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, availabilityMode=AVAILABLE, 
actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=7, rebalanceId=4, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,639 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#15 14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Start keeping track of keys for rebalance 14:17:30,639 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,639 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=15], TP: [cluster_name=ISPN] 14:17:30,639 
TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Tracking is disabled. Clear tracker: {} 14:17:30,639 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Set track to PUT_FOR_STATE_TRANSFER = true 14:17:30,639 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Signalling topology 7 is installed 14:17:30,639 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 used 878 credits, 1993224 remaining 14:17:30,639 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Waiting on view 2 being accepted 14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,639 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Updating local topology for cache test_cache: CacheTopology{id=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, 
a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,639 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Rebalance confirmation collector 7@test_cache members list updated, remaining list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,639 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,639 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) About to send back response SuccessfulResponse{responseValue=null} for command CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeC-7981, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} 14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Topology update processed, stateTransferTopologyId = 7, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} 14:17:30,639 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Installing new cache topology CacheTopology{id=6, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 60, owners = 
(2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,639 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Signalling transaction data received for topology 7 14:17:30,639 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) sending rsp for 14 to InfinispanNodeFailureTest-NodeC-7981 14:17:30,639 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (929 bytes (4645.00% of max_bundle_size) to 1 dests(s): [ISPN] 14:17:30,639 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Finished receiving of segments for cache test_cache for topology 7. 
14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Stop keeping track of changed keys for state transfer 14:17:30,639 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeC-7981: #6, conn_id=2) 14:17:30,639 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,639 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=6, rebalanceId=3, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,639 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Tracking is disabled. 
Clear tracker: {} 14:17:30,639 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (932 bytes) 14:17:30,639 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=14, rsp_expected=true, UNICAST3: DATA, seqno=6, conn_id=2, TP: [cluster_name=ISPN] 14:17:30,639 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Signalling topology 6 is installed 14:17:30,640 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,640 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} 14:17:30,640 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,640 DEBUG [org.infinispan.CLUSTER] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeA-7443, topology id = 7 14:17:30,640 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,640 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Removing no longer owned entries for cache test_cache 14:17:30,640 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,640 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (932 bytes) 14:17:30,640 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Topology changed, recalculating minTopologyId 14:17:30,640 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Topology update processed, stateTransferTopologyId = -1, startRebalance = false, pending CH = null 14:17:30,640 INFO [org.infinispan.CLUSTER] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeA-7443]ISPN100003: Finished local rebalance 14:17:30,639 
TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,640 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Minimum topology ID still is 5; nothing to change 14:17:30,640 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1994102 remaining 14:17:30,640 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=878 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=15], TP: [cluster_name=ISPN] 14:17:30,640 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#15 14:17:30,640 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#15 14:17:30,640 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,640 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (76 bytes (380.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981] 14:17:30,640 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (79 bytes) 14:17:30,640 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] 
(transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Rebalance confirmation collector 7@test_cache received confirmation for InfinispanNodeFailureTest-NodeA-7443, remaining list is [InfinispanNodeFailureTest-NodeB-62629] 14:17:30,640 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,640 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,640 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=4 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=14, rsp_expected=true, UNICAST3: DATA, seqno=6, conn_id=2, TP: [cluster_name=ISPN] 14:17:30,640 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 878 credits, 1993224 remaining 14:17:30,640 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981 <-- 
DATA(InfinispanNodeFailureTest-NodeA-7443: #6, conn_id=2) 14:17:30,641 TRACE [org.jgroups.protocols.UNICAST3] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#6 14:17:30,641 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=null} , received=true, suspected=false 14:17:30,641 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Shutting down StateConsumer of cache test_cache on node InfinispanNodeFailureTest-NodeC-7981 14:17:30,641 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Signalling transaction data received for topology 6 14:17:30,641 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 1 14:17:30,641 TRACE [org.infinispan.statetransfer.StateProviderImpl] (testng-InfinispanNodeFailureTest) Shutting down StateProvider of cache test_cache on node InfinispanNodeFailureTest-NodeC-7981 14:17:30,641 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Checking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,641 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,641 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Removing no longer owned entries for cache test_cache 14:17:30,641 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Topology changed, recalculating minTopologyId 14:17:30,641 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Minimum topology ID still is 5; nothing to change 14:17:30,641 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Waiting on view 2 being accepted 14:17:30,641 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Updating local topology for cache test_cache: CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,641 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Starting local rebalance for cache test_cache, topology = CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, 
InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,641 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,641 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Pending consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,641 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Installing new cache topology CacheTopology{id=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,641 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Received new topology for cache test_cache, isRebalance = true, isMember = true, topology = CacheTopology{id=7, rebalanceId=4, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = 
(2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, unionCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,641 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Start keeping track of keys for rebalance 14:17:30,641 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,641 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Tracking is disabled. 
Clear tracker: {} 14:17:30,641 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Set track to PUT_FOR_STATE_TRANSFER = true 14:17:30,641 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Signalling topology 7 is installed 14:17:30,641 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,642 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,642 TRACE [org.infinispan.jmx.ComponentsJmxRegistration] (testng-InfinispanNodeFailureTest) Unregistering jmx resources.. 
14:17:30,642 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Topology update processed, stateTransferTopologyId = 7, startRebalance = true, pending CH = PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int]
14:17:30,642 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Signalling transaction data received for topology 7
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int]
14:17:30,642 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Finished receiving of segments for cache test_cache for topology 7.
14:17:30,642 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Stop keeping track of changed keys for state transfer
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute concurrencyLevel [r=true,w=false,is=false,type=int]
14:17:30,642 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Tracking is disabled. Clear tracker: {}
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,642 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=0
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderMisses [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_NONE
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute storeWrites [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #13, conn_id=0)
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderLoads [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute invalidations [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=13, TP: [cluster_name=ISPN]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,642 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (122 bytes (610.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,642 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (125 bytes)
14:17:30,642 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 1
14:17:30,642 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,643 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Checking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2
14:17:30,643 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStaleStatsTreshold
14:17:30,643 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Removing no longer owned entries for cache test_cache
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictionSize [r=true,w=true,is=false,type=long]
14:17:30,643 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Topology changed, recalculating minTopologyId
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=java.lang.String]
14:17:30,643 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t2) Minimum topology ID still is 5; nothing to change
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void recordKnownGlobalKeyset
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void disconnectSource
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rollbacks [r=true,w=false,is=false,type=long]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute prepares [r=true,w=false,is=false,type=long]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute commits [r=true,w=false,is=false,type=long]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationFailures [r=true,w=false,is=false,type=long]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute pendingViewAsString [r=true,w=false,is=false,type=java.lang.String]
14:17:30,643 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=51 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=13, TP: [cluster_name=ISPN]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatio [r=true,w=false,is=false,type=java.lang.String]
14:17:30,643 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #13, conn_id=0)
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReplicationTime [r=true,w=false,is=false,type=long]
14:17:30,643 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#13
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute committedViewAsString [r=true,w=false,is=false,type=java.lang.String]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatioFloatingPoint [r=true,w=false,is=false,type=double]
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationCount [r=true,w=false,is=false,type=long]
14:17:30,643 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStatisticsEnabled
14:17:30,643 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=878 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=15], TP: [cluster_name=ISPN]
14:17:30,643 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#15
14:17:30,643 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_CONFIRM, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=7, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeB-62629]
14:17:30,643 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#15
14:17:30,643 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,643 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,643 DEBUG [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) ISPN000328: Finished local rebalance for cache test_cache on node InfinispanNodeFailureTest-NodeB-62629, topology id = 7
14:17:30,643 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,644 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) [Context=test_cache][Scope=InfinispanNodeFailureTest-NodeB-62629]ISPN100003: Finished local rebalance
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.topology.RebalanceConfirmationCollector] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Rebalance confirmation collector 7@test_cache received confirmation for InfinispanNodeFailureTest-NodeB-62629, remaining list is []
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,644 INFO [org.infinispan.CLUSTER] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) ISPN000336: Finished cluster-wide rebalance for cache test_cache, topology id = 7
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,644 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache topology updated: CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], joiners = []
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute elapsedTime [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,644 TRACE [org.infinispan.topology.CacheTopology] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,644 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=REBALANCE_START, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=7, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,644 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating cluster-wide current topology for cache test_cache, topology = CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}, availability mode = AVAILABLE
14:17:30,644 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,644 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Ignoring rebalance 7 for cache test_cache that doesn't exist locally
14:17:30,644 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 878 credits, 1993224 remaining
14:17:30,644 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000
14:17:30,644 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}
14:17:30,644 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,644 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Waiting on view 2 being accepted
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isLocatedLocally
14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.util.List locateKey 14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isAffectedByRehash 14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stateTransferInProgress [r=true,w=false,is=true,type=boolean] 14:17:30,644 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Updating local topology for cache test_cache: CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute joinComplete [r=true,w=false,is=true,type=boolean] 14:17:30,644 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,644 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0] 14:17:30,644 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Installing new cache topology CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, 
unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,644 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=8, rebalanceId=4, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,644 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Signalling topology 8 is installed 14:17:30,644 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,644 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,644 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Topology 
update processed, stateTransferTopologyId = 7, startRebalance = false, pending CH = null 14:17:30,644 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Stop keeping track of changed keys for state transfer 14:17:30,644 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Set track to PUT_FOR_STATE_TRANSFER = false 14:17:30,644 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Tracking is disabled. Clear tracker: {} 14:17:30,645 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} with new topology 8 and was pre false 14:17:30,645 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) No change listeners present! 
14:17:30,645 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Unlock State Transfer in Progress for topology ID 8 14:17:30,645 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Signalling transaction data received for topology 8 14:17:30,645 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#16 14:17:30,645 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 0 14:17:30,645 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,645 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=16], TP: [cluster_name=ISPN] 14:17:30,645 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Removing no longer owned entries for cache test_cache 14:17:30,645 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1992651 remaining 14:17:30,645 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Topology changed, recalculating minTopologyId 14:17:30,645 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating stable topology for cache test_cache: CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,645 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Wait for on-going transactions to finish for 0 milliseconds. 
14:17:30,645 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t2) Minimum topology ID still is 5; nothing to change
14:17:30,645 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updating cluster-wide stable topology for cache test_cache, topology = CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,645 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (624 bytes (3120.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,645 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}, mode=ASYNCHRONOUS, timeout=240000
14:17:30,645 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (627 bytes)
14:17:30,645 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t3) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2}
14:17:30,645 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} to addresses null with response mode GET_NONE
14:17:30,645 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t5) Updating stable topology for cache test_cache: CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,645 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (627 bytes)
14:17:30,645 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#17
14:17:30,645 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=17], TP: [cluster_name=ISPN]
14:17:30,645 TRACE [org.jgroups.protocols.MFC] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 used 572 credits, 1992079 remaining
14:17:30,646 WARN [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) ISPN000100: Stopping, but there are 0 local transactions and 1 remote transactions that did not finish in time.
14:17:30,646 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=573 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=16], TP: [cluster_name=ISPN]
14:17:30,646 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#16
14:17:30,646 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#16
14:17:30,646 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,646 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,646 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,646 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1992651 remaining
14:17:30,646 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Waiting on view 2 being accepted
14:17:30,646 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (623 bytes (3115.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,646 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Updating local topology for cache test_cache: CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,646 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Current consistent hash's routing table: [0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
14:17:30,646 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (626 bytes)
14:17:30,646 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Installing new cache topology CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache
14:17:30,646 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=8, rebalanceId=4, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,647 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (626 bytes)
14:17:30,647 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Signalling topology 8 is installed
14:17:30,647 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
14:17:30,647 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) On cache test_cache we have: added segments: []; removed segments: []
14:17:30,647 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Topology update processed, stateTransferTopologyId = 7, startRebalance = false, pending CH = null
14:17:30,647 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Stop keeping track of changed keys for state transfer
14:17:30,647 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Set track to PUT_FOR_STATE_TRANSFER = false
14:17:30,647 TRACE [org.infinispan.statetransfer.CommitManager] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Tracking is disabled. Clear tracker: {}
14:17:30,647 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Data rehash occurred startHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 30, InfinispanNodeFailureTest-NodeB-62629: 30]} and endHash: PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]} with new topology 8 and was pre false
14:17:30,647 TRACE [org.infinispan.stream.impl.LocalStreamManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) No change listeners present!
14:17:30,647 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Unlock State Transfer in Progress for topology ID 8
14:17:30,647 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Signalling transaction data received for topology 8
14:17:30,647 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], remote transactions: 1
14:17:30,647 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Checking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2
14:17:30,647 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,647 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Removing no longer owned entries for cache test_cache
14:17:30,647 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Topology changed, recalculating minTopologyId
14:17:30,647 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t3) Minimum topology ID still is 5; nothing to change
14:17:30,647 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=572 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=17], TP: [cluster_name=ISPN]
14:17:30,648 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#17
14:17:30,648 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#17
14:17:30,648 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,648 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,648 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,648 TRACE [org.jgroups.protocols.MFC] (OOB-1,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 572 credits, 1992652 remaining
14:17:30,648 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=572 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=17], TP: [cluster_name=ISPN]
14:17:30,648 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#17
14:17:30,648 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#17
14:17:30,648 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Unfinished local transactions: []
14:17:30,648 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,649 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Unfinished remote transactions: [GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2]
14:17:30,649 TRACE [org.infinispan.container.DefaultDataContainer] (testng-InfinispanNodeFailureTest) Clearing data container
14:17:30,649 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=573 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, NAKACK2: [MSG, seqno=16], TP: [cluster_name=ISPN]
14:17:30,649 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=STABLE_TOPOLOGY_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,649 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received InfinispanNodeFailureTest-NodeA-7443#16
14:17:30,649 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,649 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: delivering InfinispanNodeFailureTest-NodeA-7443#16
14:17:30,649 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 572 credits, 1992079 remaining
14:17:30,649 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0
14:17:30,649 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t4) Updating stable topology for cache test_cache: CacheTopology{id=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[ae38fac5-4aa9-40ca-8897-ff2ec8e65091, a61656c9-925c-4276-a2fd-582b76ae2bf3]}
14:17:30,649 DEBUG [org.infinispan.xsite.BackupReceiverRepositoryImpl] (testng-InfinispanNodeFailureTest) Processing cache stop: EventImpl{type=CACHE_STOPPED, newMembers=null, oldMembers=null, localAddress=null, viewId=0, subgroupsMerged=null, mergeView=false}. Cache name: 'test_cache'
14:17:30,649 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=8, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (2)[InfinispanNodeFailureTest-NodeA-7443: 29, InfinispanNodeFailureTest-NodeB-62629: 31]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=2} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,649 TRACE [org.infinispan.jmx.ComponentsJmxRegistration] (testng-InfinispanNodeFailureTest) Unregistering jmx resources..
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterAvailability [r=true,w=false,is=false,type=java.lang.String]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,649 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeC-p10-t5) Ignoring consistent hash update 8 for cache test_cache that doesn't exist locally
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,649 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) Added a new task directly: 0 task(s) are waiting
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeA-7443 used 573 credits, 1992079 remaining
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,649 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute physicalAddresses [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheManagerStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterSize [r=true,w=false,is=false,type=int]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute runningCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute globalConfigurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinator [r=true,w=false,is=true,type=boolean]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute name [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute createdCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute nodeAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterName [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterMembers [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheConfigurationNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinatorAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String pushState
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String bringSiteOnline
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String cancelPushState
14:17:30,650 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String takeSiteOffline
14:17:30,650 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=LocalTopologyManager
14:17:30,650 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=CacheContainerStats
14:17:30,650 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=CacheManager
14:17:30,650 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4eaf6486-788d-437b-b3d7-12a74d23edd0:type=CacheManager,name="DefaultCacheManager",component=GlobalXSiteAdminOperations
14:17:30,651 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Stopping LocalTopologyManager on InfinispanNodeFailureTest-NodeC-7981
14:17:30,651 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000080: Disconnecting JGroups channel ISPN
14:17:30,652 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending LEAVE request to InfinispanNodeFailureTest-NodeA-7443
14:17:30,652 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #12, conn_id=0)
14:17:30,652 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are GMS: GmsHeader[LEAVE_REQ]: mbr=InfinispanNodeFailureTest-NodeC-7981, UNICAST3: DATA, seqno=12, TP: [cluster_name=ISPN]
14:17:30,652 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (83 bytes (415.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,652 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (86 bytes)
14:17:30,652 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeC-7981 (3 headers), size=0 bytes, flags=OOB], headers are GMS: GmsHeader[LEAVE_REQ]: mbr=InfinispanNodeFailureTest-NodeC-7981, UNICAST3: DATA, seqno=12, TP: [cluster_name=ISPN]
14:17:30,652 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeC-7981: #12, conn_id=0)
14:17:30,652 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeC-7981#12
14:17:30,653 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: joiners=[], suspected=[], leaving=[InfinispanNodeFailureTest-NodeC-7981], new view: [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,653 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending LEAVE response to InfinispanNodeFailureTest-NodeC-7981
14:17:30,653 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[LEAVE_RSP], TP: [cluster_name=ISPN]
14:17:30,653 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: mcasting view [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] (2 mbrs)
14:17:30,653 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (57 bytes (285.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981]
14:17:30,653 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (60 bytes)
14:17:30,653 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#18
14:17:30,653 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=18], TP: [cluster_name=ISPN]
14:17:30,653 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes]
14:17:30,654 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=18], TP: [cluster_name=ISPN]
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received InfinispanNodeFailureTest-NodeA-7443#18
14:17:30,654 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (112 bytes (560.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#18-18 (1 messages)
14:17:30,654 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (115 bytes)
14:17:30,654 TRACE [org.jgroups.protocols.TCP_NIO2] (NioConnection.Reader [127.0.0.1:7900],InfinispanNodeFailureTest-NodeC-7981) 127.0.0.1:7902: removed connection to 127.0.0.1:7900
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received delta view [InfinispanNodeFailureTest-NodeA-7443|3], ref-view=[InfinispanNodeFailureTest-NodeA-7443|2], left=[InfinispanNodeFailureTest-NodeC-7981]
14:17:30,654 DEBUG [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: installing view [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,654 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: closing connections of non members [InfinispanNodeFailureTest-NodeC-7981]
14:17:30,654 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=18], TP: [cluster_name=ISPN]
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#18
14:17:30,654 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: received [dst: InfinispanNodeFailureTest-NodeC-7981, src: InfinispanNodeFailureTest-NodeA-7443 (2 headers), size=0 bytes, flags=OOB|NO_RELIABILITY|INTERNAL], headers are GMS: GmsHeader[LEAVE_RSP], TP: [cluster_name=ISPN]
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#18-18 (1 messages)
14:17:30,654 DEBUG [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: removed InfinispanNodeFailureTest-NodeC-7981 from xmit_table (not member anymore)
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received delta view [InfinispanNodeFailureTest-NodeA-7443|3], ref-view=[InfinispanNodeFailureTest-NodeA-7443|2], left=[InfinispanNodeFailureTest-NodeC-7981]
14:17:30,654 DEBUG [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: installing view [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,654 TRACE [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: reset digest
to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1] 14:17:30,654 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: closing connections of non members [InfinispanNodeFailureTest-NodeC-7981] 14:17:30,654 TRACE [org.jgroups.protocols.tom.TOA] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Handle view [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,654 DEBUG [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: removed InfinispanNodeFailureTest-NodeC-7981 from xmit_table (not member anymore) 14:17:30,654 TRACE [org.jgroups.protocols.MFC] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,654 TRACE [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1] 14:17:30,654 TRACE [org.jgroups.protocols.tom.TOA] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Handle view [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,654 TRACE [org.jgroups.protocols.MFC] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) new membership: [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,654 TRACE [org.jgroups.protocols.FRAG2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: removed InfinispanNodeFailureTest-NodeC-7981 from fragmentation table 14:17:30,654 TRACE [org.jgroups.protocols.FRAG2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: removed 
InfinispanNodeFailureTest-NodeC-7981 from fragmentation table 14:17:30,654 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,654 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,655 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,655 TRACE [org.jgroups.protocols.TCP_NIO2] (NioConnection.Reader [127.0.0.1:7901],InfinispanNodeFailureTest-NodeC-7981) 127.0.0.1:7902: removed connection to 127.0.0.1:7901 14:17:30,655 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Joined: [], Left: [InfinispanNodeFailureTest-NodeC-7981] 14:17:30,655 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,655 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: got LEAVE response from InfinispanNodeFailureTest-NodeA-7443 14:17:30,655 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t5) Received new cluster view: 3, isCoordinator = false, old status = REGULAR_MEMBER 14:17:30,655 INFO [org.infinispan.CLUSTER] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) ISPN100001: Node InfinispanNodeFailureTest-NodeC-7981 left the cluster 14:17:30,655 TRACE 
[org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> ACK(InfinispanNodeFailureTest-NodeB-62629: #1) 14:17:30,655 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeC-7981, headers are UNICAST3: ACK, seqno=1, conn_id=1, ts=1, TP: [cluster_name=ISPN] 14:17:30,655 TRACE [org.infinispan.transaction.impl.TransactionTable] (timeout-thread-InfinispanNodeFailureTest-NodeB-p7-t1) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], remote transactions: 1 14:17:30,656 TRACE [org.infinispan.container.versioning.NumericVersionGenerator] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Calculated rank based on view EventImpl{type=VIEW_CHANGED, newMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], oldMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], localAddress=InfinispanNodeFailureTest-NodeB-62629, viewId=3, subgroupsMerged=null, mergeView=false} and result was 844433520066560 14:17:30,656 TRACE [org.infinispan.transaction.impl.TransactionTable] (timeout-thread-InfinispanNodeFailureTest-NodeB-p7-t1) Checking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,656 TRACE [org.infinispan.transaction.impl.TransactionTable] (timeout-thread-InfinispanNodeFailureTest-NodeB-p7-t1) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,656 TRACE [org.infinispan.container.versioning.NumericVersionGenerator] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) Calculated rank based on view EventImpl{type=VIEW_CHANGED, newMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], oldMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], localAddress=InfinispanNodeFailureTest-NodeB-62629, viewId=3, subgroupsMerged=null, mergeView=false} and result was 844433520066560 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7901 (63 bytes) 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) 127.0.0.1:7902: server is not running, discarding message to 127.0.0.1:7901 14:17:30,656 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #14, conn_id=0) 14:17:30,656 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981 --> ACK(InfinispanNodeFailureTest-NodeA-7443: #6) 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=14, TP: [cluster_name=ISPN] 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeC-7981: sending msg to 
InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeC-7981, headers are UNICAST3: ACK, seqno=6, conn_id=2, ts=2, TP: [cluster_name=ISPN] 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) InfinispanNodeFailureTest-NodeC-7981: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) dest=127.0.0.1:7900 (63 bytes) 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeC-7981) 127.0.0.1:7902: server is not running, discarding message to 127.0.0.1:7900 14:17:30,656 DEBUG [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Stop discovery for InfinispanNodeFailureTest-NodeC-7981 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (67 bytes (335.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,656 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (70 bytes) 14:17:30,656 DEBUG [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) closing sockets and stopping threads 14:17:30,657 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=14, TP: [cluster_name=ISPN] 14:17:30,657 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #14, conn_id=0) 
14:17:30,657 TRACE [org.jgroups.protocols.UNICAST3] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#14 14:17:30,660 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000082: Stopping the RpcDispatcher for channel ISPN 14:17:30,661 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) Created new transaction with Xid=DummyXid{id=3} 14:17:30,661 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to send back response SuccessfulResponse{responseValue=true} for command LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,661 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Created a new local transaction: LocalXaTransaction{xid=null} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3 14:17:30,661 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) sending rsp for 13 to InfinispanNodeFailureTest-NodeA-7443 14:17:30,661 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_EXPECTED} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@7af475ed] 14:17:30,661 TRACE [org.jgroups.protocols.UNICAST3] 
(remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #15, conn_id=0) 14:17:30,661 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (testng-InfinispanNodeFailureTest) handleTxWriteCommand for command ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_EXPECTED}, origin null 14:17:30,661 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) Transaction.enlistResource(TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=null} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3}) invoked in transaction with Xid=DummyXid{id=3} 14:17:30,661 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=13, rsp_expected=true, UNICAST3: DATA, seqno=15, TP: [cluster_name=ISPN] 14:17:30,661 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XaResource.start() invoked in transaction with Xid=DummyXid{id=3} 14:17:30,661 TRACE [org.infinispan.transaction.xa.XaTransactionTable] (testng-InfinispanNodeFailureTest) start called on tx GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 14:17:30,661 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (testng-InfinispanNodeFailureTest) acquireLocalLock 14:17:30,661 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (testng-InfinispanNodeFailureTest) Acquiring locks on 
MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. 14:17:30,662 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (testng-InfinispanNodeFailureTest) Await for pending transactions for transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 using null 14:17:30,662 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (testng-InfinispanNodeFailureTest) Checking for pending locks and then locking key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} 14:17:30,662 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,662 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (80 bytes) 14:17:30,662 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=13, rsp_expected=true, UNICAST3: DATA, seqno=15, TP: [cluster_name=ISPN] 14:17:30,662 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #15, conn_id=0) 14:17:30,662 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#15 14:17:30,662 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Responses: Responses{ InfinispanNodeFailureTest-NodeB-62629: 
sender=InfinispanNodeFailureTest-NodeB-62629value=SuccessfulResponse{responseValue=true} , received=true, suspected=false InfinispanNodeFailureTest-NodeC-7981: sender=InfinispanNodeFailureTest-NodeC-7981value=SuccessfulResponse{responseValue=true} , received=true, suspected=true} 14:17:30,662 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Response(s) to LockControlCommand{cache=test_cache, keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], flags=[], unlock=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} is {InfinispanNodeFailureTest-NodeB-62629=SuccessfulResponse{responseValue=true} , InfinispanNodeFailureTest-NodeC-7981=SuccessfulResponse{responseValue=true} } 14:17:30,662 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) acquireLocalLock 14:17:30,662 TRACE [org.infinispan.interceptors.locking.PessimisticLockingInterceptor] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Acquiring backup locks on MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. 14:17:30,663 TRACE [org.infinispan.container.EntryFactoryImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Exists in context? 
null 14:17:30,663 TRACE [org.infinispan.container.EntryFactoryImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Retrieved from container null 14:17:30,663 TRACE [org.infinispan.container.EntryFactoryImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Creating new entry for key MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} 14:17:30,663 TRACE [org.infinispan.container.EntryFactoryImpl] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Updated context entry ReadCommittedEntry(6d99ba63){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=null, isCreated=true, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=null} 14:17:30,663 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Invoking: PutKeyValueCommand 14:17:30,663 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) The return value is null 14:17:30,663 TRACE [org.infinispan.transaction.impl.LocalTransaction] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Adding modification PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}. 
Mod list is [ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}] 14:17:30,663 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 6F2F8218 awaiting future java.util.concurrent.CompletableFuture@909a27[Completed normally] finished with ReturnValueStage(null)=579DAE7F 14:17:30,663 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) Transaction.commit() invoked in transaction with Xid=DummyXid{id=2} 14:17:30,663 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) runPrepare() invoked in transaction with Xid=DummyXid{id=2} 14:17:30,663 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,663 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Joined: [], Left: [InfinispanNodeFailureTest-NodeC-7981] 14:17:30,663 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|3] (2) [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,663 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) XAResource.end() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=2}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], 
isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2} 14:17:30,663 TRACE [org.infinispan.transaction.xa.XaTransactionTable] (ForkThread-1,InfinispanNodeFailureTest) end called on tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2(test_cache) 14:17:30,663 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) XaResource.prepare() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=2}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2} 14:17:30,663 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Received new cluster view: 3, isCoordinator = true, old status = COORDINATOR 14:17:30,663 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (ForkThread-1,InfinispanNodeFailureTest) Received prepare for tx: LocalXaTransaction{xid=DummyXid{id=2}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2. Skipping call as 1PC will be used. 
14:17:30,663 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Updating cluster members for all the caches. New list is [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629] 14:17:30,663 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) runCommit(forceRollback=false) invoked in transaction with Xid=DummyXid{id=2} 14:17:30,663 INFO [org.infinispan.CLUSTER] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN100001: Node InfinispanNodeFailureTest-NodeC-7981 left the cluster 14:17:30,663 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) dests=null, command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=240000 14:17:30,663 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (testng-InfinispanNodeFailureTest) Checking for pending locks: [] 14:17:30,663 TRACE [org.infinispan.transaction.tm.DummyTransaction] (ForkThread-1,InfinispanNodeFailureTest) XaResource.commit() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=2}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2} 14:17:30,663 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultPendingLockManager] (testng-InfinispanNodeFailureTest) Finished waiting for 
other potential lockers. Timed-Out? false 14:17:30,663 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Replication task sending CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1} to single recipient InfinispanNodeFailureTest-NodeB-62629 with response mode GET_ALL 14:17:30,663 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (ForkThread-1,InfinispanNodeFailureTest) Committing transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,663 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (ForkThread-1,InfinispanNodeFailureTest) Doing an 1PC prepare call on the interceptor chain 14:17:30,663 TRACE [org.infinispan.container.versioning.NumericVersionGenerator] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Calculated rank based on view EventImpl{type=VIEW_CHANGED, newMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], oldMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], localAddress=InfinispanNodeFailureTest-NodeA-7443, viewId=3, subgroupsMerged=null, mergeView=false} and result was 844429225099264 14:17:30,663 TRACE [org.infinispan.container.versioning.NumericVersionGenerator] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Calculated rank based on view EventImpl{type=VIEW_CHANGED, newMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], oldMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], localAddress=InfinispanNodeFailureTest-NodeA-7443, viewId=3, subgroupsMerged=null, mergeView=false} and result was 844429225099264 14:17:30,663 
TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (testng-InfinispanNodeFailureTest) Registering locked key: MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} 14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (testng-InfinispanNodeFailureTest) Lock key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} for owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3. timeout=19998 (MILLISECONDS) 14:17:30,664 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Invoked with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=-1} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@67bff9f2] 14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Acquire lock for GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3. 
Timeout=19998 (MILLISECONDS)
14:17:30,664 TRACE [org.jgroups.blocks.RequestCorrelator] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) InfinispanNodeFailureTest-NodeA-7443: invoking unicast RPC [req-id=15] on InfinispanNodeFailureTest-NodeB-62629
14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Created a new one: LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}
14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Try acquire. Next in queue=LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}. Current=null
14:17:30,664 TRACE [org.jgroups.protocols.UNICAST3] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #7, conn_id=1)
14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) Lock Owner CAS(null, LockPlaceHolder{lockState=WAITING, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}) => true
14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) State changed for LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}. WAITING => ACQUIRED
14:17:30,664 TRACE [org.infinispan.transaction.impl.TransactionTable] (timeout-thread-InfinispanNodeFailureTest-NodeA-p3-t1) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], remote transactions: 0
14:17:30,664 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (testng-InfinispanNodeFailureTest) LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3} successfully acquired the lock.
14:17:30,664 TRACE [org.infinispan.transaction.impl.TransactionTable] (timeout-thread-InfinispanNodeFailureTest-NodeA-p3-t1) No remote transactions pertain to originator(s) who have left the cluster.
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=15, rsp_expected=true, UNICAST3: DATA, seqno=7, conn_id=1, TP: [cluster_name=ISPN]
14:17:30,664 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #3, conn_id=0)
14:17:30,664 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (ForkThread-1,InfinispanNodeFailureTest) handleTxCommand for command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=-1}, origin null
14:17:30,664 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Invoking: PrepareCommand
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=3, TP: [cluster_name=ISPN]
14:17:30,664 TRACE [org.infinispan.interceptors.distribution.BaseDistributionInterceptor] (ForkThread-1,InfinispanNodeFailureTest) Should invoke remotely? true. hasModifications=true, hasRemoteLocksAcquired=true
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL]
14:17:30,664 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Exists in context? null
14:17:30,664 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Retrieved from container ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0}
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (80 bytes (400.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629]
14:17:30,664 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 invoking PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS_IGNORE_LEAVERS}
14:17:30,664 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=3, TP: [cluster_name=ISPN]
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (83 bytes)
14:17:30,664 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #3, conn_id=0)
14:17:30,664 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Updated context entry ReadCommittedEntry(1ce2565d){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=false, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedMetadata{version=null}}
14:17:30,664 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ForkThread-1,InfinispanNodeFailureTest) dests=null, command=PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8}, mode=SYNCHRONOUS_IGNORE_LEAVERS, timeout=15000
14:17:30,664 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#3
14:17:30,664 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: ReplaceCommand
14:17:30,664 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (testng-InfinispanNodeFailureTest) The return value is true
14:17:30,664 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (ForkThread-1,InfinispanNodeFailureTest) Replication task sending PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} to addresses null with response mode GET_ALL
14:17:30,664 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: got all ACKs (2) from members for view [InfinispanNodeFailureTest-NodeA-7443|3]
14:17:30,664 TRACE [org.infinispan.transaction.impl.LocalTransaction] (testng-InfinispanNodeFailureTest) Adding modification ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}. Mod list is null
14:17:30,664 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) Transaction.commit() invoked in transaction with Xid=DummyXid{id=3}
14:17:30,664 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) runPrepare() invoked in transaction with Xid=DummyXid{id=3}
14:17:30,664 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XAResource.end() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=3}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3}
14:17:30,664 TRACE [org.infinispan.transaction.xa.XaTransactionTable] (testng-InfinispanNodeFailureTest) end called on tx GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3(test_cache)
14:17:30,664 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XaResource.prepare() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=3}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3}
14:17:30,664 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=8 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=15, rsp_expected=true, UNICAST3: DATA, seqno=7, conn_id=1, TP: [cluster_name=ISPN]
14:17:30,664 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (testng-InfinispanNodeFailureTest) Received prepare for tx: LocalXaTransaction{xid=DummyXid{id=3}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3. Skipping call as 1PC will be used.
14:17:30,665 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) runCommit(forceRollback=false) invoked in transaction with Xid=DummyXid{id=3}
14:17:30,665 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #7, conn_id=1)
14:17:30,665 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#7
14:17:30,665 TRACE [org.infinispan.transaction.tm.DummyTransaction] (testng-InfinispanNodeFailureTest) XaResource.commit() for TransactionXaAdapter{localTransaction=LocalXaTransaction{xid=DummyXid{id=3}} LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3}
14:17:30,665 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (testng-InfinispanNodeFailureTest) Committing transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3
14:17:30,665 TRACE [org.infinispan.transaction.impl.TransactionCoordinator] (testng-InfinispanNodeFailureTest) Doing an 1PC prepare call on the interceptor chain
14:17:30,665 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 15
14:17:30,665 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=-1} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@b045b7e]
14:17:30,665 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,665 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (testng-InfinispanNodeFailureTest) handleTxCommand for command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=-1}, origin null
14:17:30,665 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,665 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: PrepareCommand
14:17:30,665 TRACE [org.infinispan.interceptors.distribution.BaseDistributionInterceptor] (testng-InfinispanNodeFailureTest) Should invoke remotely? true. hasModifications=true, hasRemoteLocksAcquired=false
14:17:30,665 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 invoking PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS_IGNORE_LEAVERS}
14:17:30,665 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to send back response SuccessfulResponse{responseValue=true} for command CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=0}
14:17:30,665 TRACE [org.jgroups.blocks.RequestCorrelator] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: invoking multicast RPC [req-id=16]
14:17:30,665 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=null, command=PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8}, mode=SYNCHRONOUS_IGNORE_LEAVERS, timeout=15000
14:17:30,665 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) sending rsp for 15 to InfinispanNodeFailureTest-NodeA-7443
14:17:30,665 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#19
14:17:30,665 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} to addresses null with response mode GET_ALL
14:17:30,665 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #16, conn_id=0)
14:17:30,665 TRACE [org.jgroups.protocols.TCP_NIO2] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=16, rsp_expected=true, NAKACK2: [MSG, seqno=19], TP: [cluster_name=ISPN]
14:17:30,665 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=15, rsp_expected=true, UNICAST3: DATA, seqno=16, TP: [cluster_name=ISPN]
14:17:30,665 TRACE [org.jgroups.protocols.MFC] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 used 445 credits, 1991634 remaining
14:17:30,665 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 7E6DD0FE thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@7a8986d7 = 7A8986D7
14:17:30,665 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: invoking multicast RPC [req-id=17]
14:17:30,665 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 7A8986D7 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@27dfa123 = 27DFA123
14:17:30,665 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (77 bytes (385.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,665 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending InfinispanNodeFailureTest-NodeB-62629#1
14:17:30,665 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 27DFA123 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@7ce50fdd = 7CE50FDD
14:17:30,665 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (80 bytes)
14:17:30,665 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to null, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=17, rsp_expected=true, NAKACK2: [MSG, seqno=1], TP: [cluster_name=ISPN]
14:17:30,665 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) Compose 6C9FDA9F (java.util.concurrent.CompletableFuture@5a9656e5[Not completed, 1 dependents]), result future java.util.concurrent.CompletableFuture@1a222d[Not completed]
14:17:30,665 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 54554E14 awaiting future java.util.concurrent.CompletableFuture@1a222d[Not completed]
14:17:30,665 TRACE [org.jgroups.protocols.MFC] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 used 305 credits, 1999695 remaining
14:17:30,665 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (497 bytes (2485.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,666 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (testng-InfinispanNodeFailureTest) 7CB6119E thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@1e52fd38 = 1E52FD38
14:17:30,666 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (500 bytes)
14:17:30,666 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (testng-InfinispanNodeFailureTest) 1E52FD38 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@562aea2 = 0562AEA2
14:17:30,666 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (testng-InfinispanNodeFailureTest) 0562AEA2 thenAccept org.infinispan.interceptors.impl.AsyncInvocationStage@412d930 = 0412D930
14:17:30,666 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (testng-InfinispanNodeFailureTest) Compose 48E412CD (java.util.concurrent.CompletableFuture@41d69dd4[Not completed, 1 dependents]), result future java.util.concurrent.CompletableFuture@51d12cf2[Not completed]
14:17:30,666 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (testng-InfinispanNodeFailureTest) 4318B4A1 awaiting future java.util.concurrent.CompletableFuture@51d12cf2[Not completed]
14:17:30,666 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (357 bytes (1785.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,666 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (360 bytes)
14:17:30,666 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=445 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=16, rsp_expected=true, NAKACK2: [MSG, seqno=19], TP: [cluster_name=ISPN]
14:17:30,666 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received InfinispanNodeFailureTest-NodeA-7443#19
14:17:30,666 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#19
14:17:30,666 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=5 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=15, rsp_expected=true, UNICAST3: DATA, seqno=16, TP: [cluster_name=ISPN]
14:17:30,666 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 16
14:17:30,666 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #16, conn_id=0)
14:17:30,666 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#16
14:17:30,666 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: , src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=305 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=17, rsp_expected=true, NAKACK2: [MSG, seqno=1], TP: [cluster_name=ISPN]
14:17:30,666 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received InfinispanNodeFailureTest-NodeB-62629#1
14:17:30,666 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeA-7443) Responses: sender=InfinispanNodeFailureTest-NodeB-62629value=SuccessfulResponse{responseValue=true} , received=true, suspected=false
14:17:30,666 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#1
14:17:30,666 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 17
14:17:30,667 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute CacheRpcCommand: PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} [sender=InfinispanNodeFailureTest-NodeB-62629]
14:17:30,667 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute CacheRpcCommand: PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} [sender=InfinispanNodeFailureTest-NodeA-7443]
14:17:30,667 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Checking if transaction data was received for topology 8, current topology is 8
14:17:30,667 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Checking if transaction data was received for topology 8, current topology is 8
14:17:30,667 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,667 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting
14:17:30,667 TRACE [org.infinispan.topology.ClusterCacheStatus] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Cluster members updated for cache test_cache, no abrupt leavers detected: cache members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]. Existing members = [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]
14:17:30,667 TRACE [org.jgroups.protocols.MFC] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeB-62629 used 305 credits, 1999695 remaining
14:17:30,667 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Calling perform() on PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8}
14:17:30,667 TRACE [org.jgroups.protocols.MFC] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeA-7443 used 445 credits, 1991634 remaining
14:17:30,667 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Calling perform() on PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8}
14:17:30,667 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Created and registered remote transaction RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], lookedUpEntries={}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, state=null}
14:17:30,667 TRACE [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t6) Cache test_cache did not lose any members, skipping rebalance
14:17:30,667 TRACE [org.infinispan.commands.tx.PrepareCommand] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Invoking remotely originated prepare: PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} with invocation context: org.infinispan.context.impl.RemoteTxInvocationContext@80052a10
14:17:30,667 TRACE [org.infinispan.commands.tx.PrepareCommand] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking remotely originated prepare: PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} with invocation context: org.infinispan.context.impl.RemoteTxInvocationContext@a70e38da
14:17:30,667 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Invoked with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@80052a10]
14:17:30,667 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoked with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@a70e38da]
14:17:30,667 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) handleTxCommand for command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8}, origin InfinispanNodeFailureTest-NodeB-62629
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Exists in context? null
14:17:30,667 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) handleTxCommand for command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8}, origin InfinispanNodeFailureTest-NodeA-7443
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Retrieved from container ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0}
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Exists in context? null
14:17:30,668 TRACE [org.infinispan.transaction.impl.RemoteTransaction] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Adding key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} to tx GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Retrieved from container ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0}
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Creating new entry for key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Updated context entry ReadCommittedEntry(5dc63833){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=false, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedMetadata{version=null}}
14:17:30,668 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Invoking: ReplaceCommand
14:17:30,668 TRACE [org.infinispan.transaction.impl.RemoteTransaction] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Adding key MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} to tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Updated context entry ReadCommittedEntry(15036738){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0, isCreated=false, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedMetadata{version=null}}
14:17:30,668 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Invoking: PrepareCommand
14:17:30,668 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking: ReplaceCommand
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Exists in context? null
14:17:30,668 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) About to commit entry ReadCommittedEntry(5dc63833){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Retrieved from container null
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Creating new entry for key MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}
14:17:30,668 TRACE [org.infinispan.transaction.impl.RemoteTransaction] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Adding key MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629} to tx GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2
14:17:30,668 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false
14:17:30,668 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled!
14:17:30,668 TRACE [org.infinispan.container.EntryFactoryImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Updated context entry ReadCommittedEntry(446bb78f){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=null, isCreated=true, isChanged=false, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=null} 14:17:30,668 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking: PutKeyValueCommand 14:17:30,668 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Creating new ICE for writing. Existing=ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0}, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=1 14:17:30,668 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking: PrepareCommand 14:17:30,668 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} in container 14:17:30,668 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to commit entry ReadCommittedEntry(446bb78f){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}} 14:17:30,668 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Trying to commit. Key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. 
Operation Flag=null, L1 invalidation=false 14:17:30,668 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Committing key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. It is a L1 invalidation or a normal put and no tracking is enabled! 14:17:30,668 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Creating new ICE for writing. Existing=null, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=some-value 14:17:30,668 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Store ImmortalCacheEntry{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value} in container 14:17:30,668 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to commit entry ReadCommittedEntry(15036738){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}} 14:17:30,668 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false 14:17:30,668 TRACE [org.infinispan.statetransfer.CommitManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled! 14:17:30,668 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Creating new ICE for writing. 
Existing=ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=0}, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=1 14:17:30,668 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} in container 14:17:30,669 TRACE [org.infinispan.interceptors.impl.TxInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Verifying transaction: originatorMissing=false, alreadyCompleted=false 14:17:30,669 TRACE [org.infinispan.interceptors.impl.TxInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Verifying transaction: originatorMissing=false, alreadyCompleted=false 14:17:30,670 TRACE [org.infinispan.statetransfer.TransactionSynchronizerInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Completing tx command release future for RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(5dc63833){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=8, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, state=null} 14:17:30,670 TRACE [org.infinispan.statetransfer.TransactionSynchronizerInterceptor] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Completing tx command release future for 
RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}=ReadCommittedEntry(446bb78f){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}, MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(15036738){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], backupKeyLocks=[], lookedUpEntriesTopology=8, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} 14:17:30,670 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) About to send back response null for command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, 
cacheName='test_cache', topologyId=8} 14:17:30,670 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) About to send back response null for command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} 14:17:30,670 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) sending rsp for 17 to InfinispanNodeFailureTest-NodeB-62629 14:17:30,670 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) sending rsp for 16 to InfinispanNodeFailureTest-NodeA-7443 14:17:30,670 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #17, conn_id=0) 14:17:30,670 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #8, conn_id=1) 14:17:30,670 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=17, rsp_expected=true, UNICAST3: DATA, seqno=8, conn_id=1, TP: 
[cluster_name=ISPN] 14:17:30,670 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=16, rsp_expected=true, UNICAST3: DATA, seqno=17, TP: [cluster_name=ISPN] 14:17:30,670 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (74 bytes (370.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,670 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (77 bytes) 14:17:30,670 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (74 bytes (370.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,671 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (77 bytes) 14:17:30,671 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=2 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=17, rsp_expected=true, UNICAST3: DATA, seqno=8, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,671 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #8, conn_id=1) 14:17:30,671 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: 
delivering InfinispanNodeFailureTest-NodeA-7443#8 14:17:30,671 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Responses: Responses{ InfinispanNodeFailureTest-NodeA-7443: sender=InfinispanNodeFailureTest-NodeA-7443, received=true, suspected=false} 14:17:30,671 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=2 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=16, rsp_expected=true, UNICAST3: DATA, seqno=17, TP: [cluster_name=ISPN] 14:17:30,671 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #17, conn_id=0) 14:17:30,671 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#17 14:17:30,671 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Response(s) to PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} is {InfinispanNodeFailureTest-NodeA-7443=SuccessfulResponse{responseValue=null} } 14:17:30,671 TRACE [org.infinispan.transaction.impl.LocalTransaction] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Adding remote locks on [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]. 
Remote locks are null 14:17:30,671 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Responses: Responses{ InfinispanNodeFailureTest-NodeB-62629: sender=InfinispanNodeFailureTest-NodeB-62629, received=true, suspected=false} 14:17:30,671 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Executing invocation handler EntryWrappingInterceptor$$Lambda$53/1706185143 with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} 14:17:30,671 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Response(s) to PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} is {InfinispanNodeFailureTest-NodeB-62629=SuccessfulResponse{responseValue=null} } 14:17:30,671 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) About to commit entry ReadCommittedEntry(1ce2565d){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, 
value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}} 14:17:30,671 TRACE [org.infinispan.transaction.impl.LocalTransaction] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Adding remote locks on [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629]. Remote locks are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981] 14:17:30,671 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false 14:17:30,671 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled! 
14:17:30,671 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Executing invocation handler EntryWrappingInterceptor$$Lambda$53/1706185143 with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} 14:17:30,671 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Creating new ICE for writing. Existing=ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1}, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=1 14:17:30,671 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) About to commit entry ReadCommittedEntry(6d99ba63){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}} 14:17:30,671 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} in container 14:17:30,671 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Trying to commit. 
Key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. Operation Flag=null, L1 invalidation=false 14:17:30,671 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Committing key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}. It is a L1 invalidation or a normal put and no tracking is enabled! 14:17:30,671 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Creating new ICE for writing. Existing=null, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=some-value 14:17:30,671 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Executing invocation handler NotificationInterceptor$1 with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} 14:17:30,671 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Store ImmortalCacheEntry{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value} in container 14:17:30,671 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Executing invocation handler PessimisticLockingInterceptor$$Lambda$112/1559840931 with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, retried=false, 
gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName='test_cache', topologyId=8} 14:17:30,671 TRACE [org.infinispan.interceptors.impl.EntryWrappingInterceptor] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) About to commit entry ReadCommittedEntry(701ba33c){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}} 14:17:30,671 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Release locks for keys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}]. owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Release lock for GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3. 14:17:30,672 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Trying to commit. Key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. Operation Flag=null, L1 invalidation=false 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) State changed for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}. ACQUIRED => RELEASED 14:17:30,672 TRACE [org.infinispan.statetransfer.CommitManager] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Committing key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}. It is a L1 invalidation or a normal put and no tracking is enabled! 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Release lock for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}? 
true 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Try acquire. Next in queue=null. Current=LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3} 14:17:30,672 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Creating new ICE for writing. Existing=ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1}, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, new value=1 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Lock Owner CAS(LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3}, null) => true 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Try acquire. Next in queue=null. Current=null 14:17:30,672 TRACE [org.infinispan.container.DefaultDataContainer] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Store ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} in container 14:17:30,672 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Clearing locked keys: [MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}] 14:17:30,672 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Executing invocation handler NotificationInterceptor$1 with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, 
value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} 14:17:30,672 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (testng-InfinispanNodeFailureTest) 4318B4A1 awaiting future java.util.concurrent.CompletableFuture@51d12cf2[Completed normally] finished with ReturnValueStage(null)=2B02A475 14:17:30,672 TRACE [org.infinispan.interceptors.impl.AsyncInvocationStage] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Executing invocation handler PessimisticLockingInterceptor$$Lambda$112/1559840931 with command PrepareCommand {modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], onePhaseCommit=true, retried=false, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName='test_cache', topologyId=8} 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Removed LocalXaTransaction{xid=DummyXid{id=3}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[], topologyId=8, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@3 from transaction table. 
14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Release locks for keys=[MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}]. owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,672 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (testng-InfinispanNodeFailureTest) Transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 has completed, notifying listening threads. 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Committed in onePhase? true isOptimistic? false 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Release lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) State changed for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}. ACQUIRED => RELEASED 14:17:30,672 TRACE [org.infinispan.transaction.impl.LocalTransaction] (testng-InfinispanNodeFailureTest) getCommitNodes recipients=null, currentTopologyId=8, members=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], txTopologyId=8 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Release lock for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}? true 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) About to invoke tx completion notification on commitNodes: null 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Try acquire. Next in queue=null. 
Current=LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Lock Owner CAS(LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}, null) => true 14:17:30,672 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 invoking TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=-1, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=ASYNCHRONOUS} 14:17:30,672 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Try acquire. Next in queue=null. Current=null 14:17:30,672 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) Topology id missing on command TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=-1, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} , setting it to 8 14:17:30,672 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Clearing locked keys: [MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}] 14:17:30,672 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=null, command=TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} , mode=ASYNCHRONOUS, timeout=15000 14:17:30,672 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, 
gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_NONE 14:17:30,672 TRACE [org.infinispan.interceptors.impl.ComposedAsyncInvocationStage] (ForkThread-1,InfinispanNodeFailureTest) 54554E14 awaiting future java.util.concurrent.CompletableFuture@1a222d[Completed normally] finished with ReturnValueStage(null)=468C2D23 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (ForkThread-1,InfinispanNodeFailureTest) Changing minimum topology ID from 5 to 8 14:17:30,672 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #18, conn_id=0) 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (ForkThread-1,InfinispanNodeFailureTest) Removed LocalXaTransaction{xid=DummyXid{id=2}} LocalTransaction{remoteLockedNodes=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981], isMarkedForRollback=false, lockedKeys=[], backupKeyLocks=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], topologyId=5, stateTransferFlag=null} org.infinispan.transaction.xa.LocalXaTransaction@2 from transaction table. 14:17:30,672 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (ForkThread-1,InfinispanNodeFailureTest) Transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 has completed, notifying listening threads. 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (ForkThread-1,InfinispanNodeFailureTest) Committed in onePhase? true isOptimistic? 
false 14:17:30,672 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=18, TP: [cluster_name=ISPN] 14:17:30,672 TRACE [org.infinispan.transaction.impl.LocalTransaction] (ForkThread-1,InfinispanNodeFailureTest) getCommitNodes recipients=null, currentTopologyId=8, members=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], txTopologyId=5 14:17:30,672 TRACE [org.infinispan.transaction.impl.TransactionTable] (ForkThread-1,InfinispanNodeFailureTest) About to invoke tx completion notification on commitNodes: null 14:17:30,672 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (testng-InfinispanNodeFailureTest) Response(s) to TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} is {} 14:17:30,672 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 invoking TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=-1, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} to recipient list null with options RpcOptions{timeout=15000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=ASYNCHRONOUS} 14:17:30,673 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (ForkThread-1,InfinispanNodeFailureTest) Topology id missing on command TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=-1, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} , setting it to 8 14:17:30,673 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ForkThread-1,InfinispanNodeFailureTest) dests=null, command=TxCompletionNotificationCommand{ 
xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} , mode=ASYNCHRONOUS, timeout=15000 14:17:30,673 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (ForkThread-1,InfinispanNodeFailureTest) Replication task sending TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} to single recipient InfinispanNodeFailureTest-NodeB-62629 with response mode GET_NONE 14:17:30,673 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (132 bytes (660.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443] 14:17:30,673 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (135 bytes) 14:17:30,673 TRACE [org.jgroups.protocols.UNICAST3] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #9, conn_id=1) 14:17:30,673 TRACE [org.jgroups.protocols.TCP_NIO2] (ForkThread-1,InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=9, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,673 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (ForkThread-1,InfinispanNodeFailureTest) Response(s) to TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} is {} 14:17:30,673 DEBUG [org.infinispan.tx.InfinispanNodeFailureTest] (ForkThread-1,InfinispanNodeFailureTest) Exiting fork callable. 
14:17:30,673 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (132 bytes (660.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,673 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (135 bytes) 14:17:30,674 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=61 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=18, TP: [cluster_name=ISPN] 14:17:30,674 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #18, conn_id=0) 14:17:30,674 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#18 14:17:30,674 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,674 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute CacheRpcCommand: TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} [sender=InfinispanNodeFailureTest-NodeB-62629] 14:17:30,674 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command GetKeyValueCommand 
{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, flags=[]} and InvocationContext [org.infinispan.context.SingleKeyNonTxInvocationContext@6a1a57b] 14:17:30,674 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Checking if transaction data was received for topology 8, current topology is 8 14:17:30,674 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting 14:17:30,674 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Exists in context? null 14:17:30,674 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Calling perform() on TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, cacheName=test_cache} 14:17:30,674 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Processing completed transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 14:17:30,674 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Retrieved from container ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} 14:17:30,674 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Wrap MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} for read. 
Entry=ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} 14:17:30,674 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=9, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,674 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: GetKeyValueCommand 14:17:30,674 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #9, conn_id=1) 14:17:30,674 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#9 14:17:30,674 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Removed remote transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 ? 
RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(5dc63833){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=8, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, state=null} 14:17:30,674 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,674 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Removed RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}], lookedUpEntries={MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(5dc63833){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[], backupKeyLocks=[], lookedUpEntriesTopology=8, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3, state=null} from transaction table. 
14:17:30,674 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 has completed, notifying listening threads. 14:17:30,674 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute CacheRpcCommand: TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,674 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Marking transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 as completed 14:17:30,674 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Checking if transaction data was received for topology 8, current topology is 8 14:17:30,674 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Invoking forward of TxCompletionNotification for transaction GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3. 
Affected keys: [] 14:17:30,674 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,674 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) CommandTopologyId=8, localTopologyId=8 14:17:30,674 TRACE [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Calling perform() on TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=8, gtx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, cacheName=test_cache} 14:17:30,674 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Release locks for keys=[]. owner=GlobalTx:InfinispanNodeFailureTest-NodeB-62629:3 14:17:30,674 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Processing completed transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,674 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Removed remote transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 ? 
RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}=ReadCommittedEntry(446bb78f){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}, MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(15036738){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], backupKeyLocks=[], lookedUpEntriesTopology=8, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} 14:17:30,675 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Changing minimum topology ID from 5 to 8 14:17:30,675 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command GetKeyValueCommand {key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, flags=[]} and InvocationContext [org.infinispan.context.SingleKeyNonTxInvocationContext@3e8589c8] 14:17:30,675 TRACE 
[org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Exists in context? null 14:17:30,675 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Retrieved from container ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} 14:17:30,675 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Removed RemoteTransaction{modifications=[ReplaceCommand{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, oldValue=0, newValue=1, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, flags=[], successful=true, valueMatcher=MATCH_ALWAYS}, PutKeyValueCommand{key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, flags=[], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true}], lookedUpEntries={MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}=ReadCommittedEntry(446bb78f){key=MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}, value=some-value, isCreated=true, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}, MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}=ReadCommittedEntry(15036738){key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1, isCreated=false, isChanged=true, isRemoved=false, isValid=true, isExpired=false, skipLookup=false, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}}}, lockedKeys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}], backupKeyLocks=[], lookedUpEntriesTopology=8, isMarkedForRollback=false, tx=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2, state=null} from transaction table. 
14:17:30,675 TRACE [org.infinispan.transaction.impl.AbstractCacheTransaction] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 has completed, notifying listening threads. 14:17:30,675 TRACE [org.infinispan.container.EntryFactoryImpl] (testng-InfinispanNodeFailureTest) Wrap MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443} for read. Entry=ImmortalCacheEntry{key=MagicKey#X{0/31EB60CD/44@InfinispanNodeFailureTest-NodeA-7443}, value=1} 14:17:30,675 TRACE [org.infinispan.transaction.impl.TransactionTable] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Marking transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 as completed 14:17:30,675 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: GetKeyValueCommand 14:17:30,675 TRACE [org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Invoking forward of TxCompletionNotification for transaction GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. Affected keys: [] 14:17:30,675 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) CommandTopologyId=8, localTopologyId=8 14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.DefaultLockManager] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Release locks for keys=[MagicKey#Z{1/7ECB61B7/3@InfinispanNodeFailureTest-NodeB-62629}]. owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2 14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Release lock for GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2. 
14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) State changed for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}. ACQUIRED => RELEASED 14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Release lock for LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}? true 14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Try acquire. Next in queue=null. Current=LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2} 14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Lock Owner CAS(LockPlaceHolder{lockState=RELEASED, owner=GlobalTx:InfinispanNodeFailureTest-NodeA-7443:2}, null) => true 14:17:30,675 TRACE [org.infinispan.util.concurrent.locks.impl.InfinispanLock] (remote-thread-InfinispanNodeFailureTest-NodeB-p6-t5) Try acquire. Next in queue=null. Current=null [TestSuiteProgress] Test failed: org.infinispan.tx.InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand 14:17:30,675 ERROR [org.infinispan.commons.test.TestSuiteProgress] (testng-InfinispanNodeFailureTest) Test failed: org.infinispan.tx.InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand java.lang.AssertionError: expected: but was: at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?] at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364) ~[testng-6.8.8.jar:?] at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80) ~[testng-6.8.8.jar:?] at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:185) ~[testng-6.8.8.jar:?] at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:192) ~[testng-6.8.8.jar:?] 
at org.infinispan.tx.InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand(InfinispanNodeFailureTest.java:135) ~[test-classes/:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_77] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_77] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_77] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_77] at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84) ~[testng-6.8.8.jar:?] at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) [testng-6.8.8.jar:?] at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) [testng-6.8.8.jar:?] at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) [testng-6.8.8.jar:?] at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) [testng-6.8.8.jar:?] at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) [testng-6.8.8.jar:?] at org.testng.TestRunner.privateRun(TestRunner.java:767) [testng-6.8.8.jar:?] at org.testng.TestRunner.run(TestRunner.java:617) [testng-6.8.8.jar:?] at org.testng.SuiteRunner.runTest(SuiteRunner.java:348) [testng-6.8.8.jar:?] at org.testng.SuiteRunner.access$000(SuiteRunner.java:38) [testng-6.8.8.jar:?] at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382) [testng-6.8.8.jar:?] at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) [testng-6.8.8.jar:?] 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77] [TestSuiteProgress] Tests succeeded: 0, failed: 1, skipped: 0 14:17:30,681 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) Before setup clearContent 14:17:30,681 DEBUG [org.infinispan.tx.InfinispanNodeFailureTest] (testng-InfinispanNodeFailureTest) *** Test method complete; clearing contents on all caches. 14:17:30,685 DEBUG [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Cleaning data for cache 'test_cache' on a cache manager at address InfinispanNodeFailureTest-NodeA-7443 14:17:30,685 DEBUG [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Data container size before clear: 2 14:17:30,686 TRACE [org.infinispan.container.DefaultDataContainer] (testng-InfinispanNodeFailureTest) Clearing data container 14:17:30,686 DEBUG [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Cleaning data for cache 'test_cache' on a cache manager at address InfinispanNodeFailureTest-NodeB-62629 14:17:30,686 DEBUG [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Data container size before clear: 2 14:17:30,686 TRACE [org.infinispan.container.DefaultDataContainer] (testng-InfinispanNodeFailureTest) Clearing data container 14:17:30,686 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) After setup clearContent 14:17:30,686 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) Before setup destroy 14:17:30,687 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command SizeCommand{} and InvocationContext 
[org.infinispan.context.impl.NonTxInvocationContext@56b7de7c] 14:17:30,687 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: SizeCommand 14:17:30,687 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command KeySetCommand{cache=test_cache} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@32ad5768] 14:17:30,687 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: KeySetCommand 14:17:30,691 TRACE [org.jgroups.protocols.pbcast.STABLE] (Timer-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending stable msg to InfinispanNodeFailureTest-NodeA-7443: InfinispanNodeFailureTest-NodeA-7443: [19], InfinispanNodeFailureTest-NodeB-62629: [1] 14:17:30,692 TRACE [org.jgroups.protocols.TCP_NIO2] (Timer-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeA-7443, headers are STABLE: [STABLE_GOSSIP] view-id= [InfinispanNodeFailureTest-NodeA-7443|3], TP: [cluster_name=ISPN] 14:17:30,692 TRACE [org.jgroups.protocols.TCP_NIO2] (Timer-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (2 headers), size=43 bytes, flags=OOB|NO_RELIABILITY|INTERNAL] 14:17:30,692 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (2 headers), size=43 bytes, flags=OOB|NO_RELIABILITY|INTERNAL], headers are STABLE: [STABLE_GOSSIP] view-id= [InfinispanNodeFailureTest-NodeA-7443|3], TP: [cluster_name=ISPN] 14:17:30,692 TRACE [org.jgroups.protocols.pbcast.STABLE] 
(INT-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: handling digest from InfinispanNodeFailureTest-NodeA-7443: mine: InfinispanNodeFailureTest-NodeA-7443: [-1], InfinispanNodeFailureTest-NodeB-62629: [-1] other: InfinispanNodeFailureTest-NodeA-7443: [19], InfinispanNodeFailureTest-NodeB-62629: [1] result: InfinispanNodeFailureTest-NodeA-7443: [19], InfinispanNodeFailureTest-NodeB-62629: [1] 14:17:30,695 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Local size on InfinispanNodeFailureTest-NodeA-7443 before stopping: 0 14:17:30,695 DEBUG [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Stopping cache test_cache on InfinispanNodeFailureTest-NodeA-7443 14:17:30,695 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Shutting down StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeA-7443 14:17:30,695 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Node InfinispanNodeFailureTest-NodeA-7443 leaving cache test_cache 14:17:30,695 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=3} 14:17:30,695 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Removed node InfinispanNodeFailureTest-NodeA-7443 from cache test_cache: members = [InfinispanNodeFailureTest-NodeB-62629], joiners = [] 14:17:30,696 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Cache test_cache topology updated: CacheTopology{id=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = 
(1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[a61656c9-925c-4276-a2fd-582b76ae2bf3]}, members = [InfinispanNodeFailureTest-NodeB-62629], joiners = [] 14:17:30,696 TRACE [org.infinispan.topology.CacheTopology] (testng-InfinispanNodeFailureTest) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 14:17:30,696 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Updating cluster-wide current topology for cache test_cache, topology = CacheTopology{id=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[a61656c9-925c-4276-a2fd-582b76ae2bf3]}, availability mode = AVAILABLE 14:17:30,696 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=null, command=CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=3}, mode=ASYNCHRONOUS, timeout=240000 14:17:30,696 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Attempting to execute command on self: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = 
(1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=3} 14:17:30,696 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=3} to single recipient InfinispanNodeFailureTest-NodeB-62629 with response mode GET_NONE 14:17:30,696 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t4) Ignoring consistent hash update 9 for cache test_cache that doesn't exist locally 14:17:30,696 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #10, conn_id=1) 14:17:30,697 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=10, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,697 DEBUG [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Queueing rebalance for cache test_cache with members [InfinispanNodeFailureTest-NodeB-62629] 14:17:30,697 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) Rebalancing consistent hash for cache test_cache, members are [InfinispanNodeFailureTest-NodeB-62629] 14:17:30,697 TRACE [org.jgroups.protocols.TCP_NIO2] 
(TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (582 bytes (2910.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629] 14:17:30,697 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (585 bytes) 14:17:30,697 TRACE [org.infinispan.topology.ClusterCacheStatus] (testng-InfinispanNodeFailureTest) The balanced CH is the same as the current CH, not rebalancing 14:17:30,697 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=511 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=0, rsp_expected=false, UNICAST3: DATA, seqno=10, conn_id=1, TP: [cluster_name=ISPN] 14:17:30,697 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Shutting down StateConsumer of cache test_cache on node InfinispanNodeFailureTest-NodeA-7443 14:17:30,697 TRACE [org.infinispan.statetransfer.StateProviderImpl] (testng-InfinispanNodeFailureTest) Shutting down StateProvider of cache test_cache on node InfinispanNodeFailureTest-NodeA-7443 14:17:30,697 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #10, conn_id=1) 14:17:30,697 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#10 14:17:30,697 TRACE [org.infinispan.jmx.ComponentsJmxRegistration] (testng-InfinispanNodeFailureTest) Unregistering jmx resources.. 
14:17:30,697 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 0 14:17:30,697 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long] 14:17:30,697 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 14:17:30,697 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,697 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderLoads [r=true,w=false,is=false,type=long] 14:17:30,697 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE 
[org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=CH_UPDATE, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], throwable=null, viewId=3} [sender=InfinispanNodeFailureTest-NodeA-7443] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute invalidations [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderMisses [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute storeWrites [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,698 TRACE 
[org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Added a new task directly: 0 task(s) are waiting 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Waiting on view 3 being accepted 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStaleStatsTreshold 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean] 14:17:30,698 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Updating local topology for cache test_cache: CacheTopology{id=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=java.lang.String] 14:17:30,698 TRACE [org.infinispan.topology.CacheTopology] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Current consistent hash's routing table: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 
14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute joinComplete [r=true,w=false,is=true,type=boolean] 14:17:30,698 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Installing new cache topology CacheTopology{id=9, rebalanceId=4, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[a61656c9-925c-4276-a2fd-582b76ae2bf3]} on cache test_cache 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stateTransferInProgress [r=true,w=false,is=true,type=boolean] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingStatus [r=true,w=false,is=false,type=java.lang.String] 14:17:30,698 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Received new topology for cache test_cache, isRebalance = false, isMember = true, topology = CacheTopology{id=9, rebalanceId=4, currentCH=PartitionerConsistentHash:ReplicatedConsistentHash{ns = 60, owners = (1)[InfinispanNodeFailureTest-NodeB-62629: 60]}, pendingCH=null, unionCH=null, actualMembers=[InfinispanNodeFailureTest-NodeB-62629], persistentUUIDs=[a61656c9-925c-4276-a2fd-582b76ae2bf3]} 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationFailures [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Signalling topology 9 is installed 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) 
Attribute committedViewAsString [r=true,w=false,is=false,type=java.lang.String] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReplicationTime [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatioFloatingPoint [r=true,w=false,is=false,type=double] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationCount [r=true,w=false,is=false,type=long] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatio [r=true,w=false,is=false,type=java.lang.String] 14:17:30,698 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) On cache test_cache we have: new segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]; old segments: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute pendingViewAsString [r=true,w=false,is=false,type=java.lang.String] 14:17:30,698 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) On cache test_cache we have: added segments: []; removed segments: [] 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void 
resetStatistics 14:17:30,698 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStatisticsEnabled 14:17:30,698 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Topology update processed, stateTransferTopologyId = -1, startRebalance = false, pending CH = null 14:17:30,698 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Signalling transaction data received for topology 9 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isLocatedLocally 14:17:30,699 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Checking for transactions originated on leavers. Current cache members are [InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], remote transactions: 0 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.util.List locateKey 14:17:30,699 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) No remote transactions pertain to originator(s) who have left the cluster. 
14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isAffectedByRehash 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rollbacks [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute prepares [r=true,w=false,is=false,type=long] 14:17:30,699 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Removing no longer owned entries for cache test_cache 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean] 14:17:30,699 DEBUG [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Topology changed, recalculating minTopologyId 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute commits [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,699 TRACE [org.infinispan.transaction.impl.TransactionTable] (transport-thread-InfinispanNodeFailureTest-NodeB-p8-t6) Changing minimum topology ID from 8 to 9 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void disconnectSource 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void recordKnownGlobalKeyset 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long] 
14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute concurrencyLevel [r=true,w=false,is=false,type=int] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictionSize [r=true,w=true,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute elapsedTime [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] 
(testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long] 14:17:30,699 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=ClusterCacheStats 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Activation 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=StateTransferManager 14:17:30,699 TRACE 
[org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RpcManager 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=DistributionManager 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Transactions 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RollingUpgradeManager 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Passivation 14:17:30,699 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=LockManager 14:17:30,700 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Configuration 14:17:30,700 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Statistics 14:17:30,700 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Wait 
for on-going transactions to finish for 0 milliseconds. 14:17:30,700 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) All transactions terminated 14:17:30,700 TRACE [org.infinispan.container.DefaultDataContainer] (testng-InfinispanNodeFailureTest) Clearing data container 14:17:30,700 DEBUG [org.infinispan.xsite.BackupReceiverRepositoryImpl] (testng-InfinispanNodeFailureTest) Processing cache stop: EventImpl{type=CACHE_STOPPED, newMembers=null, oldMembers=null, localAddress=null, viewId=0, subgroupsMerged=null, mergeView=false}. Cache name: 'test_cache' 14:17:30,700 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command SizeCommand{} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@5782dcec] 14:17:30,700 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: SizeCommand 14:17:30,700 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (testng-InfinispanNodeFailureTest) Invoked with command KeySetCommand{cache=test_cache} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@39f32d86] 14:17:30,700 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (testng-InfinispanNodeFailureTest) Invoking: KeySetCommand 14:17:30,701 TRACE [org.infinispan.test.TestingUtil] (testng-InfinispanNodeFailureTest) Local size on InfinispanNodeFailureTest-NodeB-62629 before stopping: 0 14:17:30,701 DEBUG [org.infinispan.cache.impl.CacheImpl] (testng-InfinispanNodeFailureTest) Stopping cache test_cache on InfinispanNodeFailureTest-NodeB-62629 14:17:30,701 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (testng-InfinispanNodeFailureTest) Shutting down StateTransferManager of cache test_cache on node InfinispanNodeFailureTest-NodeB-62629 14:17:30,701 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Node 
InfinispanNodeFailureTest-NodeB-62629 leaving cache test_cache 14:17:30,701 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) dests=[InfinispanNodeFailureTest-NodeA-7443], command=CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=3}, mode=SYNCHRONOUS, timeout=240000 14:17:30,701 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (testng-InfinispanNodeFailureTest) Replication task sending CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=3} to single recipient InfinispanNodeFailureTest-NodeA-7443 with response mode GET_ALL 14:17:30,701 TRACE [org.jgroups.blocks.RequestCorrelator] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: invoking unicast RPC [req-id=18] on InfinispanNodeFailureTest-NodeA-7443 14:17:30,701 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #19, conn_id=0) 14:17:30,701 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=18, rsp_expected=true, UNICAST3: DATA, seqno=19, TP: [cluster_name=ISPN] 14:17:30,701 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (118 bytes (590.00% of max_bundle_size) to 1 dests(s): 
[ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,701 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (121 bytes)
14:17:30,701 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=46 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=18, rsp_expected=true, UNICAST3: DATA, seqno=19, TP: [cluster_name=ISPN]
14:17:30,701 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #19, conn_id=0)
14:17:30,702 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#19
14:17:30,702 TRACE [org.jgroups.blocks.RequestCorrelator] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 18
14:17:30,702 TRACE [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Attempting to execute non-CacheRpcCommand: CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=3} [sender=InfinispanNodeFailureTest-NodeB-62629]
14:17:30,702 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,702 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Removed node InfinispanNodeFailureTest-NodeB-62629 from cache test_cache: members = [], joiners = []
14:17:30,702 DEBUG [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) The last node of cache test_cache left
14:17:30,702 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache no longer has any members, removing topology
14:17:30,702 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache topology updated: null, members = [], joiners = []
14:17:30,702 TRACE [org.infinispan.topology.ClusterCacheStatus] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) Cache test_cache stable topology updated: members = [], joiners = [], topology = null
14:17:30,702 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) About to send back response SuccessfulResponse{responseValue=null} for command CacheTopologyControlCommand{cache=test_cache, type=LEAVE, sender=InfinispanNodeFailureTest-NodeB-62629, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=3}
14:17:30,702 TRACE [org.jgroups.blocks.RequestCorrelator] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) sending rsp for 18 to InfinispanNodeFailureTest-NodeB-62629
14:17:30,702 TRACE [org.jgroups.protocols.UNICAST3] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeB-62629: #11, conn_id=1)
14:17:30,702 TRACE [org.jgroups.protocols.TCP_NIO2] (remote-thread-InfinispanNodeFailureTest-NodeA-p2-t5) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=18, rsp_expected=true, UNICAST3: DATA, seqno=11, conn_id=1, TP: [cluster_name=ISPN]
14:17:30,702 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (76 bytes (380.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629]
14:17:30,702 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (79 bytes)
14:17:30,702 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=4 bytes, flags=OOB|DONT_BUNDLE|NO_FC|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=18, rsp_expected=true, UNICAST3: DATA, seqno=11, conn_id=1, TP: [cluster_name=ISPN]
14:17:30,702 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #11, conn_id=1)
14:17:30,702 TRACE [org.jgroups.protocols.UNICAST3] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: delivering InfinispanNodeFailureTest-NodeA-7443#11
14:17:30,703 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (OOB-2,InfinispanNodeFailureTest-NodeB-62629) Responses: sender=InfinispanNodeFailureTest-NodeA-7443value=SuccessfulResponse{responseValue=null} , received=true, suspected=false
14:17:30,703 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (testng-InfinispanNodeFailureTest) Shutting down StateConsumer of cache test_cache on node InfinispanNodeFailureTest-NodeB-62629
14:17:30,703 TRACE [org.infinispan.statetransfer.StateProviderImpl] (testng-InfinispanNodeFailureTest) Shutting down StateProvider of cache test_cache on node InfinispanNodeFailureTest-NodeB-62629
14:17:30,703 TRACE [org.infinispan.jmx.ComponentsJmxRegistration] (testng-InfinispanNodeFailureTest) Unregistering jmx resources..
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute elapsedTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute storeWrites [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute invalidations [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderMisses [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheLoaderLoads [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceStart [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStaleStatsTreshold
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute commits [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute prepares [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rollbacks [r=true,w=false,is=false,type=long]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute joinComplete [r=true,w=false,is=true,type=boolean]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stateTransferInProgress [r=true,w=false,is=true,type=boolean]
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isAffectedByRehash
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.util.List locateKey
14:17:30,703 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation boolean isLocatedLocally
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void recordKnownGlobalKeyset
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation long synchronizeData
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void disconnectSource
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute passivations [r=true,w=false,is=false,type=long]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictionSize [r=true,w=true,is=false,type=long]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute activations [r=true,w=false,is=false,type=java.lang.String]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=false,type=boolean]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksAvailable [r=true,w=false,is=false,type=int]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute concurrencyLevel [r=true,w=false,is=false,type=int]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfLocksHeld [r=true,w=false,is=false,type=int]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute committedViewAsString [r=true,w=false,is=false,type=java.lang.String]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatioFloatingPoint [r=true,w=false,is=false,type=double]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute pendingViewAsString [r=true,w=false,is=false,type=java.lang.String]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute successRatio [r=true,w=false,is=false,type=java.lang.String]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReplicationTime [r=true,w=false,is=false,type=long]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationFailures [r=true,w=false,is=false,type=long]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute replicationCount [r=true,w=false,is=false,type=long]
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void resetStatistics
14:17:30,704 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void setStatisticsEnabled
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Statistics
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=ClusterCacheStats
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Transactions
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=StateTransferManager
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=DistributionManager
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RollingUpgradeManager
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Passivation
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Configuration
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=Activation
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=LockManager
14:17:30,704 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=Cache,name="test_cache(repl_sync)",manager="DefaultCacheManager",component=RpcManager
14:17:30,704 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) Wait for on-going transactions to finish for 0 milliseconds.
14:17:30,704 DEBUG [org.infinispan.transaction.impl.TransactionTable] (testng-InfinispanNodeFailureTest) All transactions terminated
14:17:30,705 TRACE [org.infinispan.container.DefaultDataContainer] (testng-InfinispanNodeFailureTest) Clearing data container
14:17:30,705 DEBUG [org.infinispan.xsite.BackupReceiverRepositoryImpl] (testng-InfinispanNodeFailureTest) Processing cache stop: EventImpl{type=CACHE_STOPPED, newMembers=null, oldMembers=null, localAddress=null, viewId=0, subgroupsMerged=null, mergeView=false}. Cache name: 'test_cache'
14:17:30,705 DEBUG [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Stopping cache manager ISPN on InfinispanNodeFailureTest-NodeB-62629
14:17:30,705 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Cache stop order: [test_cache]
14:17:30,705 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Ignoring cache test_cache, it is already terminated.
14:17:30,705 TRACE [org.infinispan.jmx.ComponentsJmxRegistration] (testng-InfinispanNodeFailureTest) Unregistering jmx resources..
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String pushState
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String bringSiteOnline
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String cancelPushState
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String takeSiteOffline
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterAvailability [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterName [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinatorAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterMembers [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute name [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute createdCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute globalConfigurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheManagerStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterSize [r=true,w=false,is=false,type=int]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinator [r=true,w=false,is=true,type=boolean]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute runningCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute nodeAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute physicalAddresses [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheConfigurationNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,705 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=GlobalXSiteAdminOperations
14:17:30,705 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=LocalTopologyManager
14:17:30,705 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=CacheManager
14:17:30,705 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-4b127b69-15ae-4b16-96ff-51435e86465b:type=CacheManager,name="DefaultCacheManager",component=CacheContainerStats
14:17:30,706 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Stopping LocalTopologyManager on InfinispanNodeFailureTest-NodeB-62629
14:17:30,706 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000080: Disconnecting JGroups channel ISPN
14:17:30,706 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending LEAVE request to InfinispanNodeFailureTest-NodeA-7443
14:17:30,706 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #20, conn_id=0)
14:17:30,706 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are GMS: GmsHeader[LEAVE_REQ]: mbr=InfinispanNodeFailureTest-NodeB-62629, UNICAST3: DATA, seqno=20, TP: [cluster_name=ISPN]
14:17:30,706 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (83 bytes (415.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,706 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (86 bytes)
14:17:30,706 TRACE [org.jgroups.protocols.TCP_NIO2] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeB-62629 (3 headers), size=0 bytes, flags=OOB], headers are GMS: GmsHeader[LEAVE_REQ]: mbr=InfinispanNodeFailureTest-NodeB-62629, UNICAST3: DATA, seqno=20, TP: [cluster_name=ISPN]
14:17:30,706 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeB-62629: #20, conn_id=0)
14:17:30,706 TRACE [org.jgroups.protocols.UNICAST3] (OOB-3,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeB-62629#20
14:17:30,706 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: joiners=[], suspected=[], leaving=[InfinispanNodeFailureTest-NodeB-62629], new view: [InfinispanNodeFailureTest-NodeA-7443|4] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,706 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending LEAVE response to InfinispanNodeFailureTest-NodeB-62629
14:17:30,706 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[LEAVE_RSP], TP: [cluster_name=ISPN]
14:17:30,707 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: mcasting view [InfinispanNodeFailureTest-NodeA-7443|4] (1) [InfinispanNodeFailureTest-NodeA-7443] (1 mbrs)
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (57 bytes (285.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629]
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (60 bytes)
14:17:30,707 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending InfinispanNodeFailureTest-NodeA-7443#20
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) 127.0.0.1:7900: connecting to 127.0.0.1:7901
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to null, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=20], TP: [cluster_name=ISPN]
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes]
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (NioConnection.Reader [127.0.0.1:7900],InfinispanNodeFailureTest-NodeB-62629) 127.0.0.1:7901: removed connection to 127.0.0.1:7900
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst: , src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=61 bytes], headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=20], TP: [cluster_name=ISPN]
14:17:30,707 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received InfinispanNodeFailureTest-NodeA-7443#20
14:17:30,707 TRACE [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#20-20 (1 messages)
14:17:30,707 TRACE [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received delta view [InfinispanNodeFailureTest-NodeA-7443|4], ref-view=[InfinispanNodeFailureTest-NodeA-7443|3], left=[InfinispanNodeFailureTest-NodeB-62629]
14:17:30,707 DEBUG [org.jgroups.protocols.pbcast.GMS] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: installing view [InfinispanNodeFailureTest-NodeA-7443|4] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,707 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: closing connections of non members [InfinispanNodeFailureTest-NodeB-62629, InfinispanNodeFailureTest-NodeC-7981]
14:17:30,707 DEBUG [org.jgroups.protocols.pbcast.NAKACK2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: removed InfinispanNodeFailureTest-NodeB-62629 from xmit_table (not member anymore)
14:17:30,707 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (112 bytes (560.00% of max_bundle_size) to 1 dests(s): [ISPN]
14:17:30,707 TRACE [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: reset digest to InfinispanNodeFailureTest-NodeA-7443: [-1]
14:17:30,707 TRACE [org.jgroups.protocols.tom.TOA] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Handle view [InfinispanNodeFailureTest-NodeA-7443|4] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,708 TRACE [org.jgroups.protocols.MFC] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) new membership: [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,708 TRACE [org.jgroups.protocols.FRAG2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: removed InfinispanNodeFailureTest-NodeB-62629 from fragmentation table
14:17:30,708 TRACE [org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Added a new task directly: 0 task(s) are waiting
14:17:30,708 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) New view accepted: [InfinispanNodeFailureTest-NodeA-7443|4] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,708 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Joined: [], Left: [InfinispanNodeFailureTest-NodeB-62629]
14:17:30,708 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN000094: Received new cluster view for channel ISPN: [InfinispanNodeFailureTest-NodeA-7443|4] (1) [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,708 INFO [org.infinispan.CLUSTER] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) ISPN100001: Node InfinispanNodeFailureTest-NodeB-62629 left the cluster
14:17:30,708 TRACE [org.infinispan.container.versioning.NumericVersionGenerator] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Calculated rank based on view EventImpl{type=VIEW_CHANGED, newMembers=[InfinispanNodeFailureTest-NodeA-7443], oldMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], localAddress=InfinispanNodeFailureTest-NodeA-7443, viewId=4, subgroupsMerged=null, mergeView=false} and result was 1125904201809920
14:17:30,708 TRACE [org.infinispan.container.versioning.NumericVersionGenerator] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) Calculated rank based on view EventImpl{type=VIEW_CHANGED, newMembers=[InfinispanNodeFailureTest-NodeA-7443], oldMembers=[InfinispanNodeFailureTest-NodeA-7443, InfinispanNodeFailureTest-NodeB-62629], localAddress=InfinispanNodeFailureTest-NodeA-7443, viewId=4, subgroupsMerged=null, mergeView=false} and result was 1125904201809920
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-1,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: received [dst: InfinispanNodeFailureTest-NodeB-62629, src: InfinispanNodeFailureTest-NodeA-7443 (2 headers), size=0 bytes, flags=OOB|NO_RELIABILITY|INTERNAL], headers are GMS: GmsHeader[LEAVE_RSP], TP: [cluster_name=ISPN]
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (NioConnection.Reader [null],InfinispanNodeFailureTest-NodeB-62629) 127.0.0.1:7901: removed connection to 127.0.0.1:7900
14:17:30,708 TRACE [org.jgroups.protocols.pbcast.GMS] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: got LEAVE response from InfinispanNodeFailureTest-NodeA-7443
14:17:30,708 TRACE [org.jgroups.protocols.UNICAST3] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 --> DATA(InfinispanNodeFailureTest-NodeA-7443: #4, conn_id=0)
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeA-7443, headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=4, TP: [cluster_name=ISPN]
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (Incoming-1,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: looping back message [dst: InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL]
14:17:30,708 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629 --> ACK(InfinispanNodeFailureTest-NodeA-7443: #11)
14:17:30,708 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Received new cluster view: 4, isCoordinator = true, old status = COORDINATOR
14:17:30,708 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) Updating cluster members for all the caches. New list is [InfinispanNodeFailureTest-NodeA-7443]
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeB-62629: sending msg to InfinispanNodeFailureTest-NodeA-7443, src=InfinispanNodeFailureTest-NodeB-62629, headers are UNICAST3: ACK, seqno=11, conn_id=1, ts=3, TP: [cluster_name=ISPN]
14:17:30,708 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-InfinispanNodeFailureTest-NodeA-p4-t1) dests=null, command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=InfinispanNodeFailureTest-NodeA-7443, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=240000
14:17:30,708 DEBUG [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Stop discovery for InfinispanNodeFailureTest-NodeB-62629
14:17:30,708 DEBUG [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) closing sockets and stopping threads
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) InfinispanNodeFailureTest-NodeB-62629: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeA-7443]
14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: received [dst:
InfinispanNodeFailureTest-NodeA-7443, src: InfinispanNodeFailureTest-NodeA-7443 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=4, TP: [cluster_name=ISPN] 14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) dest=127.0.0.1:7900 (63 bytes) 14:17:30,708 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443 <-- DATA(InfinispanNodeFailureTest-NodeA-7443: #4, conn_id=0) 14:17:30,708 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeB-62629) 127.0.0.1:7901: server is not running, discarding message to 127.0.0.1:7900 14:17:30,709 TRACE [org.jgroups.protocols.UNICAST3] (INT-2,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: delivering InfinispanNodeFailureTest-NodeA-7443#4 14:17:30,709 TRACE [org.jgroups.protocols.pbcast.GMS] (ViewHandler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: got all ACKs (1) from members for view [InfinispanNodeFailureTest-NodeA-7443|4] 14:17:30,709 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000082: Stopping the RpcDispatcher for channel ISPN 14:17:30,710 DEBUG [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Stopping cache manager ISPN on InfinispanNodeFailureTest-NodeA-7443 14:17:30,710 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Cache stop order: [test_cache] 14:17:30,710 TRACE [org.infinispan.manager.DefaultCacheManager] (testng-InfinispanNodeFailureTest) Ignoring cache test_cache, it is already terminated. 14:17:30,710 TRACE [org.infinispan.jmx.ComponentsJmxRegistration] (testng-InfinispanNodeFailureTest) Unregistering jmx resources.. 
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String cancelPushState
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String takeSiteOffline
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String pushState
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation java.lang.String bringSiteOnline
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinatorAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute cacheManagerStatus [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute runningCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute globalConfigurationAsProperties [r=true,w=false,is=false,type=java.util.Properties]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute name [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterMembers [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute coordinator [r=true,w=false,is=true,type=boolean]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute nodeAddress [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheConfigurationNames [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute definedCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute version [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterName [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute createdCacheCount [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterSize [r=true,w=false,is=false,type=int]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute physicalAddresses [r=true,w=false,is=false,type=java.lang.String]
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Operation void startCache
14:17:30,710 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute rebalancingEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute clusterAvailability [r=true,w=false,is=false,type=java.lang.String]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute stores [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageReadTime [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hits [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageWriteTime [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute hitRatio [r=true,w=false,is=false,type=double]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeHits [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute removeMisses [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute readWriteRatio [r=true,w=false,is=false,type=double]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute numberOfEntries [r=true,w=false,is=false,type=int]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute misses [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute statisticsEnabled [r=true,w=true,is=true,type=boolean]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute timeSinceReset [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute averageRemoveTime [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.ResourceDMBean] (testng-InfinispanNodeFailureTest) Attribute evictions [r=true,w=false,is=false,type=long]
14:17:30,711 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=GlobalXSiteAdminOperations
14:17:30,711 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=CacheManager
14:17:30,711 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=LocalTopologyManager
14:17:30,711 TRACE [org.infinispan.jmx.JmxUtil] (testng-InfinispanNodeFailureTest) Unregistered infinispan-6599bb76-2f69-466c-b5a1-6944a8e653a1:type=CacheManager,name="DefaultCacheManager",component=CacheContainerStats
14:17:30,711 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (testng-InfinispanNodeFailureTest) Stopping LocalTopologyManager on InfinispanNodeFailureTest-NodeA-7443
14:17:30,711 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000080: Disconnecting JGroups channel ISPN
14:17:30,712 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 --> ACK(InfinispanNodeFailureTest-NodeB-62629: #20)
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeB-62629, src=InfinispanNodeFailureTest-NodeA-7443, headers are UNICAST3: ACK, seqno=20, ts=3, TP: [cluster_name=ISPN]
14:17:30,712 TRACE [org.jgroups.protocols.UNICAST3] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443 --> ACK(InfinispanNodeFailureTest-NodeC-7981: #12)
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) InfinispanNodeFailureTest-NodeA-7443: sending msg to InfinispanNodeFailureTest-NodeC-7981, src=InfinispanNodeFailureTest-NodeA-7443, headers are UNICAST3: ACK, seqno=12, ts=4, TP: [cluster_name=ISPN]
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeB-62629]
14:17:30,712 DEBUG [org.infinispan.test.fwk.TEST_PING] (testng-InfinispanNodeFailureTest) Stop discovery for InfinispanNodeFailureTest-NodeA-7443
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7901 (63 bytes)
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) 127.0.0.1:7900: server is not running, discarding message to 127.0.0.1:7901
14:17:30,712 DEBUG [org.jgroups.protocols.TCP_NIO2] (testng-InfinispanNodeFailureTest) closing sockets and stopping threads
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) InfinispanNodeFailureTest-NodeA-7443: sending 1 msgs (60 bytes (300.00% of max_bundle_size) to 1 dests(s): [ISPN:InfinispanNodeFailureTest-NodeC-7981]
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) dest=127.0.0.1:7902 (63 bytes)
14:17:30,712 TRACE [org.jgroups.protocols.TCP_NIO2] (TransferQueueBundler,InfinispanNodeFailureTest-NodeA-7443) 127.0.0.1:7900: server is not running, discarding message to 127.0.0.1:7902
14:17:30,713 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (testng-InfinispanNodeFailureTest) ISPN000082: Stopping the RpcDispatcher for channel ISPN
14:17:30,713 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) After setup destroy
14:17:30,713 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) Before setup testClassFinished
14:17:30,714 DEBUG [org.infinispan.commons.test.TestNGTestListener] (testng-InfinispanNodeFailureTest) After setup testClassFinished
14:17:30,715 WARN [org.infinispan.commons.test.TestNGTestListener] (main) Possible leaked threads at the end of the test suite:
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main) "ForkJoinPool.commonPool-worker-1" #1 daemon prio=5 tid=0x16 nid=NA waiting
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.lang.Thread.State: WAITING
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    sun.misc.Unsafe.park(Native Method)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main) "ForkJoinPool.commonPool-worker-2" #2 daemon prio=5 tid=0x26 nid=NA timed_waiting
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.lang.Thread.State: TIMED_WAITING
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    sun.misc.Unsafe.park(Native Method)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
14:17:30,716 WARN [org.infinispan.commons.test.TestNGTestListener] (main)    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.738 sec <<< FAILURE! - in org.infinispan.tx.InfinispanNodeFailureTest
killedNodeDoesNotBreakReplaceCommand(org.infinispan.tx.InfinispanNodeFailureTest) Time elapsed: 0.438 sec <<< FAILURE!
java.lang.AssertionError: expected: but was:
	at org.testng.AssertJUnit.fail(AssertJUnit.java:59)
	at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364)
	at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80)
	at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:185)
	at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:192)
	at org.infinispan.tx.InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand(InfinispanNodeFailureTest.java:135)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
	at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
	at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
	at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
	at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
	at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
	at org.testng.TestRunner.privateRun(TestRunner.java:767)
	at org.testng.TestRunner.run(TestRunner.java:617)
	at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
	at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
	at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
	at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Results :

Failed tests:
  InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand:135 expected: but was:

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.927 s
[INFO] Finished at: 2016-10-26T14:17:31+02:00
[INFO] Final Memory: 35M/1019M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-cli) on project infinispan-core: There are test failures.
[ERROR]
[ERROR] Please refer to /home/rvansa/workspace/ispn/infinispan/core/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException