Type: Bug
Resolution: Done
Priority: Minor
Fix Version: EAP_EWP 5.1.0
Affects Versions: mod_cluster 1.0.4.GA, EAP 5.0.1.GA
Release Notes
The status page of the mod_cluster manager was not updated upon failover, so worker nodes were listed as active and available after they had failed. The status page now updates when nodes fail.
Documented as Resolved Issue
The mod_cluster-manager status page does not get updated on failover; the node still shows up as active and available:
balancer: [1] Name: mycluster Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 0 Timeout: 0 Maxtry: 1
node: [1:1],Balancer: mycluster,JVMRoute: node01,Domain: [],Host: 10.0.242.204,Port: 8009,Type: ajp,flushpackets: 0,flushwait: 10,ping: 10,smax: 1,ttl: 60,timeout: 0
node: [2:2],Balancer: mycluster,JVMRoute: node02,Domain: [],Host: 10.0.242.205,Port: 8009,Type: ajp,flushpackets: 0,flushwait: 10,ping: 10,smax: 1,ttl: 60,timeout: 0
host: 1 [localhost] vhost: 1 node: 1
host: 2 [localhost] vhost: 1 node: 2
context: 1 [/load-demo] vhost: 1 node: 1 status: 1
context: 2 [/thespike] vhost: 1 node: 1 status: 1
context: 3 [/load-demo] vhost: 1 node: 2 status: 1
context: 4 [/thespike] vhost: 1 node: 2 status: 1
This behaviour is already fixed in mod_cluster 1.1.x; it just needs to be backported to 1.0.x and included in EAP.
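For anyone reproducing this, a minimal sketch of how the manager output could be polled for stale entries after stopping a worker. This is illustrative only: the host, the /mod_cluster-manager path, and the ModClusterStatusCheck class are assumptions, not part of this issue, and the line filtering may need adjusting to the exact markup the manager page emits.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Minimal sketch: fetch the mod_cluster-manager page and print its
// node:/context: summary lines so stale entries for a failed worker are
// easy to spot. The URL below is a placeholder; adjust the host, path and
// filtering to the actual setup.
public class ModClusterStatusCheck {
    public static void main(String[] args) throws Exception {
        String statusUrl = args.length > 0
                ? args[0]
                : "http://localhost/mod_cluster-manager"; // hypothetical address
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(statusUrl).openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                String trimmed = line.trim();
                // Keep only the worker summary lines, like the ones quoted above.
                if (trimmed.startsWith("node:") || trimmed.startsWith("context:")) {
                    System.out.println(trimmed);
                }
            }
        }
    }
}

With the fix applied, the node: and context: lines for a stopped worker (for example node02 above) should disappear from the output instead of remaining listed as active.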
Is related to: MODCLUSTER-195 mod_cluster-manager does not update when a node is taken down (Closed)