We have noticed abnormal CPU usage on our production servers, which occurs randomly after a few minutes to a few hours of normal operation. The only way to recover is to restart the server; the CPU stays high even after disconnecting all clients from the server.
I took a thread dump before restarting the server. It seems that a thread is running in an infinite loop:
"proxy-xnio I/O-3" #31 prio=5 os_prio=0 tid=0x00007ff8b93c0800 nid=0x67b4 runnable [0x00007ff8941ad000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.NativeThread.current(Native Method)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:468)
	- locked <0x00000000840382e0> (a java.lang.Object)
	- locked <0x00000000840382d0> (a java.lang.Object)
	at org.xnio.nio.NioSocketConduit.write(NioSocketConduit.java:152)
	at io.undertow.server.protocol.http.HttpResponseConduit.write(HttpResponseConduit.java:599)
	at org.xnio.conduits.AbstractStreamSinkConduit.write(AbstractStreamSinkConduit.java:51)
	at org.xnio.conduits.ConduitStreamSinkChannel.write(ConduitStreamSinkChannel.java:150)
	at io.undertow.channels.DetachableStreamSinkChannel.write(DetachableStreamSinkChannel.java:240)
	at io.undertow.server.HttpServerExchange$WriteDispatchChannel.write(HttpServerExchange.java:2004)
	at io.undertow.server.handlers.sse.ServerSentEventConnection$SseWriteListener.handleEvent(ServerSentEventConnection.java:510)
	- locked <0x0000000084037eb0> (a io.undertow.server.handlers.sse.ServerSentEventConnection)
	at io.undertow.server.handlers.sse.ServerSentEventConnection$SseWriteListener.handleEvent(ServerSentEventConnection.java:475)
	at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
	at io.undertow.channels.DetachableStreamSinkChannel$SetterDelegatingListener.handleEvent(DetachableStreamSinkChannel.java:285)
	at io.undertow.channels.DetachableStreamSinkChannel$SetterDelegatingListener.handleEvent(DetachableStreamSinkChannel.java:272)
	at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
	at org.xnio.conduits.WriteReadyHandler$ChannelListenerHandler.writeReady(WriteReadyHandler.java:65)
	at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:93)
	at org.xnio.nio.WorkerThread.run(WorkerThread.java:559)
I tried to reproduce it on my local test bed with a few clients connected. It took hours, but I finally managed to do it.
After debugging, it appears that in DetachableStreamSinkChannel.java, line 240:
@Override
public int write(final ByteBuffer src) throws IOException {
    if (isFinished()) {
        throw UndertowMessages.MESSAGES.channelIsClosed();
    }
    return delegate.write(src);
}
delegate.write(src) returns 0, which creates an infinite loop in ServerSentEventConnection$SseWriteListener.handleEvent().
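To illustrate the failure mode, here is a minimal sketch (the `Sink` interface and method names are ours, not Undertow API): when write() always returns 0, a write listener that simply re-registers and retries never makes progress, so the I/O thread spins at 100% CPU.

```java
import java.nio.ByteBuffer;

public class ZeroWriteSpinSketch {
    // Hypothetical stand-in for a stream sink channel.
    interface Sink {
        int write(ByteBuffer src);
    }

    // Counts how many times the handler would be re-invoked before giving up.
    // The cap exists only so this sketch terminates; the real listener has no
    // such guard, so a permanent zero-byte write means a permanent busy loop.
    static int invocationsBeforeProgress(Sink sink, ByteBuffer src, int cap) {
        int calls = 0;
        while (calls < cap) {
            calls++;
            int written = sink.write(src);
            if (written > 0) {
                return calls; // progress: the loop would exit normally
            }
            // written == 0: the listener re-registers for a write-ready
            // event and is invoked again immediately -> busy loop
        }
        return calls;
    }

    public static void main(String[] args) {
        ByteBuffer empty = ByteBuffer.allocate(0); // pos=0 lim=0, like the bug
        int calls = invocationsBeforeProgress(b -> 0, empty, 1000);
        System.out.println(calls); // prints 1000: the cap was hit, no progress
    }
}
```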
When the issue occurs, the input src ByteBuffer is in the following state (as reported by IntelliJ):
delegate.write(src) src: java.nio.DirectByteBuffer[pos=0 lim=0 cap=16384]
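For context, that buffer state alone explains the zero return value: with pos == lim, remaining() is 0, and a channel write of an empty buffer transfers nothing. A minimal sketch (class name is ours):

```java
import java.nio.ByteBuffer;

public class EmptyBufferDemo {
    public static void main(String[] args) {
        // Reproduce the observed state: pos=0 lim=0 cap=16384
        ByteBuffer src = ByteBuffer.allocateDirect(16384);
        src.limit(0);

        // remaining() == lim - pos == 0, so any SocketChannel.write(src)
        // has nothing to transfer and returns 0 immediately.
        System.out.println(src.remaining()); // prints 0
    }
}
```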
After spending a lot of time searching for a solution, I found the Undertow issue UNDERTOW-282 (https://issues.jboss.org/browse/UNDERTOW-282), which looks very similar to mine but occurs on AbstractStreamSourceConduit.read() instead of AbstractStreamSinkConduit.write() (which makes sense, since we are using an SSE connection to send data to clients).
I tried the workaround mentioned in that issue, which consists of detecting when write() returns 0 repeatedly and closing the underlying connection, by registering a custom ResponseWrapper in the HttpExchange of my handler.
It works great as a workaround.
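The workaround can be sketched as a wrapper channel (the class name, threshold, and use of the generic WritableByteChannel interface are our assumptions for illustration, not the actual Undertow ResponseWrapper API): count consecutive zero-byte writes and close the underlying connection once a threshold is exceeded, breaking the busy loop.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Hypothetical guard illustrating the workaround described above.
public class ZeroWriteGuardChannel implements WritableByteChannel {
    private final WritableByteChannel delegate;
    private final int threshold;
    private int consecutiveZeroWrites;

    public ZeroWriteGuardChannel(WritableByteChannel delegate, int threshold) {
        this.delegate = delegate;
        this.threshold = threshold;
    }

    @Override
    public int write(ByteBuffer src) throws IOException {
        int written = delegate.write(src);
        if (written == 0) {
            if (++consecutiveZeroWrites >= threshold) {
                // Give up: the write is making no progress, so close the
                // connection instead of letting the listener spin forever.
                close();
                throw new IOException(
                    "closed after " + threshold + " consecutive zero-byte writes");
            }
        } else {
            consecutiveZeroWrites = 0; // any progress resets the counter
        }
        return written;
    }

    @Override
    public boolean isOpen() {
        return delegate.isOpen();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}
```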
I think the fix applied to read() in UNDERTOW-282 should also be applied to write().
clones:
- UNDERTOW-712 Randomly ServerSentEventConnection$SseWriteListener.handleEvent goes into an infinite loop (Resolved)

is incorporated by:
- JBEAP-5060 Upgrade Undertow from 1.3.22.Final to 1.3.23.Final (Closed)