Type: Bug
Resolution: Done
Priority: Major
Fix Version: JDG 7.2 ER3
This bug is quite odd: it has been identified only with JPA on Oracle DBs, but it reproduces consistently there. The cluster is supposed to consist of two nodes, yet
server1.getDefaultCacheManager().getClusterSize()
reports three. Furthermore, the server startup logs contain the following entries (in order of appearance, starting the two servers one after another):
[java] 05:34:29,499 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: Data Grid 7.2.0 (WildFly Core 2.1.18.Final-redhat-1) starting
[java] 05:34:32,665 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN000094: Received new cluster view for channel default: [node0|9] (2) [node0, node0]
[java] 05:34:34,699 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Data Grid 7.2.0 (WildFly Core 2.1.18.Final-redhat-1) started in 5824ms - Started 180 of 253 services (138 services are lazy, passive or on-demand)
[java] 05:34:35,855 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: Data Grid 7.2.0 (WildFly Core 2.1.18.Final-redhat-1) starting
[java] 05:34:39,227 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (jgroups-12,node0) ISPN000094: Received new cluster view for channel default: [node0|10] (3) [node0, node0, node1]
[java] 05:34:41,327 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Data Grid 7.2.0 (WildFly Core 2.1.18.Final-redhat-1) started in 6497ms - Started 180 of 253 services (138 services are lazy, passive or on-demand)
[java] 05:34:41,835 INFO [org.jboss.as.server] (Management Triggered Shutdown) WFLYSRV0241: Shutting down in response to management operation 'shutdown'
[java] 05:34:42,027 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (jgroups-6,node1) ISPN000094: Received new cluster view for channel default: [node0|11] (2) [node0, node1]
[java] 05:34:42,051 INFO [org.jboss.as] (MSC service thread 1-1) WFLYSRV0050: Data Grid 7.2.0 (WildFly Core 2.1.18.Final-redhat-1) stopped in 194ms
...followed by the shutdown of the other server.
The cluster seems to contain one "phantom node": note that node0 appears twice in the cluster views above.
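The duplicate is visible directly in the JGroups view strings in the log. As a minimal sketch (plain Java, not part of the server; the view string is copied from the log above), a helper like this can parse a view line and flag member names that appear more than once:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ViewChecker {
    // Extracts the member list from a JGroups view string such as
    // "[node0|10] (3) [node0, node0, node1]" and returns any member
    // names that occur more than once (candidate "phantom nodes").
    static List<String> duplicateMembers(String view) {
        Matcher m = Pattern.compile("\\(\\d+\\) \\[([^\\]]*)\\]").matcher(view);
        if (!m.find()) {
            return Collections.emptyList();
        }
        String[] members = m.group(1).split(",\\s*");
        Set<String> seen = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (String member : members) {
            if (!seen.add(member)) {
                duplicates.add(member);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        // View taken verbatim from the second server's startup log.
        String view = "[node0|10] (3) [node0, node0, node1]";
        System.out.println(duplicateMembers(view)); // prints [node0]
    }
}
```

Run against the three-member view from the log, this reports node0 as duplicated, while the final two-member view after shutdown yields no duplicates.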