Details
- Type: Bug
- Priority: Minor
- Resolution: Obsolete
- Affects Version: 7.1.0.ER2
Description
When a worker (EAP) tries to unregister the same app twice (e.g. on reload or shutdown):
2017-07-24 14:50:56,393 INFO [io.undertow] (default task-25) UT005047: Unregistering context /, from node jboss-eap-7.1-1
2017-07-24 14:50:56,439 INFO [io.undertow] (default task-28) UT005047: Unregistering context /, from node jboss-eap-7.1-1
then the Undertow balancer correctly reports
2017-07-24 14:50:56,441 ERROR [io.undertow] (default task-28) UT005043: Error in processing MCMP commands: Type:MEM, Mess: MEM: Can't update or insert context
as the context has already been removed. However, this appears to break all further app unregistering, and therefore the removal of the whole node in the case of a shutdown.
[standalone@192.168.122.206:9990 /] shutdown //jboss-eap-7.1-1
This should result in the node being removed completely from the balancer, but the console still shows
"balancer" => {"mycluster" => { "max-attempts" => 1, "sticky-session" => true, "sticky-session-cookie" => "JSESSIONID", "sticky-session-force" => false, "sticky-session-path" => undefined, "sticky-session-remove" => false, "wait-worker" => 0, "load-balancing-group" => undefined, "node" => {"jboss-eap-7.1-1" => { "aliases" => [ "default-host", "localhost" ], "cache-connections" => 40, "elected" => 0, "flush-packets" => false, "load" => -1, "load-balancing-group" => undefined, "max-connections" => 40, "open-connections" => 0, "ping" => 10, "queue-new-requests" => true, "read" => 0L, "request-queue-size" => 1000, "status" => "NODE_DOWN", "timeout" => 0, "ttl" => 60L, "uri" => "ajp://192.168.122.206:8009/?#", "written" => 0L, "context" => {"/clusterbench" => { "requests" => 0, "status" => "disabled" }} }} }}
due to
2017-07-24 15:12:09,573 ERROR [io.undertow] (default task-5) UT005043: Error in processing MCMP commands: Type:MEM, Mess: MEM: Can't update or insert context
the unregistering sequence is broken, and the node is removed only later due to unresponsiveness.
Result:
Removing the same context twice should print an error message and continue processing the remaining MCMP messages.
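The expected behavior can be sketched as follows. This is an illustrative model only, not the actual Undertow code; process_batch, stop_on_error, and the context set are invented for the sketch:

```python
def process_batch(commands, contexts, stop_on_error):
    """Apply REMOVE-APP-style commands to the set of registered contexts.

    commands      -- list of context paths to unregister
    contexts      -- set of currently registered context paths (mutated)
    stop_on_error -- True models the buggy abort-on-first-error behavior
    """
    errors = []
    for path in commands:
        if path not in contexts:
            # corresponds to "MEM: Can't update or insert context" in the log
            errors.append(f"cannot remove unknown context {path}")
            if stop_on_error:
                break  # buggy: the remaining commands are never processed
            continue   # expected: report the error and carry on
        contexts.remove(path)
    return errors

# "/" arrives twice, as in the reported logs
batch = ["/", "/", "/clusterbench"]

buggy = {"/", "/clusterbench"}
process_batch(batch, buggy, stop_on_error=True)
print(buggy)   # {'/clusterbench'} -- leftover context, node removal is stuck

fixed = {"/", "/clusterbench"}
process_batch(batch, fixed, stop_on_error=False)
print(fixed)   # set() -- all contexts removed, node removal can proceed
```

With abort-on-first-error, the duplicate REMOVE of "/" leaves "/clusterbench" registered forever, which matches the stuck "NODE_DOWN" node seen in the console above.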
Reproducing
- Take EAP 7.1.0.ER2 (due to JBEAP-12298)
- Set up an Undertow balancer with one worker
- Deploy a custom root app to the worker
jboss-web.xml:
<jboss-web>
    <context-root>/</context-root>
</jboss-web>
- Turn off the worker and watch the balancer console
/subsystem=undertow/configuration=filter/mod-cluster=modcluster:read-resource(include-runtime=true,recursive=true)
- It is clearly visible that the worker is still listed even though it should have been removed:
"balancer" => {"mycluster" => { ... "node" => {"jboss-eap-7.1-1" => { "aliases" => [ "default-host", "localhost" ], "cache-connections" => 40, "elected" => 0, "flush-packets" => false, "load" => -1, "load-balancing-group" => undefined, "max-connections" => 40, "open-connections" => 0, "ping" => 10, "queue-new-requests" => true, "read" => 0L, "request-queue-size" => 1000, "status" => "NODE_DOWN", "timeout" => 0, "ttl" => 60L, "uri" => "ajp://192.168.122.206:8009/?#", "written" => 0L, "context" => {"/clusterbench" => { "requests" => 0, "status" => "disabled" }}
- The outcome depends on the order in which the deployments are removed. In this example, clusterbench was to be removed after the root app, so it is still visible after shutdown. The node (worker) remains visible on the console for at most 10 s by default (PING/PONG).
Issue Links
- relates to JBEAP-12298: Custom root app is causing issues when root location is still on (Closed)