- Bug
- Resolution: Unresolved
- Optional
- None
- 8.0.0.GA
- False
- None
- False
At initial clustered deployment (scale from 0 to 2), the following warning occurred at startup in the logs.
15:12:10,374 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (thread-5,null,hsc-1-6g95k) ISPN000329: Unable to read rebalancing status from coordinator hsc-1-m66zw: java.util.concurrent.CompletionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node hsc-1-m66zw was suspected
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:636)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.AbstractRequest.completeExceptionally(AbstractRequest.java:75)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:49)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:51)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1579)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1479)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1681)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.JChannel.up(JChannel.java:733)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:131)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.stack.Protocol.up(Protocol.java:340)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.FORK.up(FORK.java:145)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.FRAG2.up(FRAG2.java:139)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.FlowControl.up(FlowControl.java:253)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.FlowControl.up(FlowControl.java:261)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.pbcast.GMS.up(GMS.java:845)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.AUTH.up(AUTH.java:119)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:226)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1083)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:822)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:804)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.UNICAST3.up(UNICAST3.java:453)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:680)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.Encrypt.handleEncryptedMessage(Encrypt.java:272)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.Encrypt.up(Encrypt.java:167)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:105)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.FailureDetection.up(FailureDetection.java:180)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.FD_SOCK2.up(FD_SOCK2.java:188)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.MERGE3.up(MERGE3.java:274)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.stack.Protocol.up(Protocol.java:340)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.Discovery.up(Discovery.java:294)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.protocols.TP.passMessageUp(TP.java:1184)
	at org.jgroups@5.2.18.Final-redhat-00001//org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:107)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at org.wildfly.clustering.context@8.0.0.GA-redhat-00011//org.wildfly.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
	at org.wildfly.clustering.context@8.0.0.GA-redhat-00011//org.wildfly.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:78)
	at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node hsc-1-m66zw was suspected
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:31)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:75)
	at org.infinispan.core@14.0.17.Final-redhat-00002//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:45)
	... 34 more
I know that scaling a DeploymentConfig from 0 to 2 can hit the known issue of two competing coordinators, but this does not look like that case: the logs do not indicate coordinator competition (see the sketch after the log excerpt below).
15:12:09,850 INFO  [org.jboss.as.clustering.jgroups] (ServerService Thread Pool -- 78) WFLYCLJG0033: Connected 'ee' channel. 'hsc-1-m66zw' joined cluster 'ee' with view: [hsc-1-m66zw|0] (1) [hsc-1-m66zw]
15:12:09,945 INFO  [org.jboss.as.clustering.jgroups] (ServerService Thread Pool -- 78) WFLYCLJG0033: Connected 'ee' channel. 'hsc-1-6g95k' joined cluster 'ee' with view: [hsc-1-m66zw|1] (2) [hsc-1-m66zw, hsc-1-6g95k]
The cluster formed correctly despite the warning in the logs, and all tests passed as well.
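To double-check coordinator behavior in future runs, here is a minimal diagnostic sketch of my own (not part of the deployment; it assumes a default JGroups stack and the hypothetical class name CoordinatorWatch) that joins the 'ee' cluster and logs every view together with its coordinator:

import org.jgroups.JChannel;
import org.jgroups.Receiver;
import org.jgroups.View;

// Diagnostic sketch only: join the 'ee' cluster (name taken from the logs above)
// and print each installed view plus its coordinator. With a single healthy
// coordinator, the coordinator address stays stable across views; competing
// coordinators would typically show up as separate initial views followed by a
// MergeView once MERGE3 kicks in.
public class CoordinatorWatch {
    public static void main(String[] args) throws Exception {
        try (JChannel ch = new JChannel()) { // default stack; the real deployment uses KUBE_PING discovery
            ch.setReceiver(new Receiver() {
                @Override
                public void viewAccepted(View view) {
                    // The first member of a JGroups view is the coordinator.
                    System.out.printf("view %s, coordinator=%s%n", view, view.getCoord());
                }
            });
            ch.connect("ee");
            Thread.sleep(60_000); // observe view changes for a minute
        }
    }
}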
With CR3, this exception popped up for the first time since I have been looking into EAP 8 results. When it happened, it occurred with openshift.KUBE_PING. I do not know whether it is rare enough that this was a coincidence, or whether we will see it more often as of CR3. I will keep observing it.
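In case it does show up more often, a small helper could tell this known transient warning apart from real failures when triaging test output. A sketch, assuming the Infinispan classes are on the classpath (SuspectTriage and isTransientSuspect are hypothetical names of mine):

import org.infinispan.remoting.transport.jgroups.SuspectException;

// Hypothetical triage helper: walks a throwable's cause chain (for example the
// CompletionException in the warning above) and reports whether its root cause
// is the ISPN000400 suspect condition seen in this issue.
public final class SuspectTriage {
    public static boolean isTransientSuspect(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof SuspectException) {
                return true;
            }
        }
        return false;
    }
}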
I found this past issue, https://issues.redhat.com/browse/CLOUD-3047, which seems to describe the same situation.