Type: Bug
Resolution: Done
Priority: Major
Affects Version: 1.3.22.Final
I had a slow consumer on a WebSocket connection.
The consumer's TCP receive buffer was full, so the connection was advertising a zero window; attempts to flush data resulted in zero bytes written.
Every time my application sends a new message, it calls flush, which results in a call to AbstractFramedChannel.flushSenders. As part of its work, flushSenders constructs the frame header, allocating a new buffer from the buffer pool.
Following the defaults in the io.undertow.Undertow class, my pool consists of 4k direct buffers with no maximum size.
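
For reference, a minimal server setup approximating that configuration might look like the sketch below, with the default-equivalent values pinned explicitly; the port, host, and handler are illustrative, not my actual application:

import io.undertow.Handlers;
import io.undertow.Undertow;

public class SlowConsumerServer {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .setBufferSize(4096)    // 4k buffer slices, as described above
                .setDirectBuffers(true) // allocated from direct memory
                .addHttpListener(8080, "localhost")
                .setHandler(Handlers.websocket((exchange, channel) -> {
                    // a slow consumer on the far end leaves this channel write-blocked
                }))
                .build();
        server.start();
    }
}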
With a slow consumer, every message therefore triggers a fresh 4k allocation from the pool just to satisfy a header of fewer than 32 bytes.
This eventually exhausted all available direct buffer memory, and it did not recover until the slow consumer disconnected.
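
To make the pattern concrete, here is a standalone sketch of the growth, using DefaultByteBufferPool directly rather than Undertow's actual flush path; the loop bound and the 14-byte header are illustrative:

import io.undertow.connector.PooledByteBuffer;
import io.undertow.server.DefaultByteBufferPool;

import java.util.ArrayList;
import java.util.List;

public class HeaderAllocationGrowth {
    public static void main(String[] args) {
        // Mirrors the described defaults: direct 4k buffers, no maximum pool size.
        DefaultByteBufferPool pool = new DefaultByteBufferPool(true, 4096, -1, 0);
        List<PooledByteBuffer> retainedHeaders = new ArrayList<>();

        // Each message triggers a flush that prepares a frame header. With the
        // peer advertising a zero window, write() returns 0, so the header
        // buffer stays allocated: 4k of direct memory per message for a
        // header of fewer than 32 bytes.
        for (int message = 0; message < 10_000; message++) {
            PooledByteBuffer header = pool.allocate(); // fresh 4k direct slice
            header.getBuffer().put(new byte[14]);      // small frame header
            header.getBuffer().flip();
            retainedHeaders.add(header);               // never written, never freed
        }
        System.out.println("Retained " + retainedHeaders.size() + " pooled 4k buffers");
    }
}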
One simple patch might be for AbstractFramedChannel.flushSenders to recognize that it is still waiting for writes to become ready and to leave new frames in the newFrames collection until the channel is writable. A cap on the number of frames flushed at once might also be useful, ideally expressed as a size limit so that no more data is prepped than necessary. Both ideas are sketched below.
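
In code, the combined guard might take roughly this shape; all names here (GuardedFlusher, writeReady, MAX_FRAMES_PER_FLUSH, prepareAndWrite) are hypothetical stand-ins, not Undertow internals:

import java.util.ArrayDeque;
import java.util.Queue;

class GuardedFlusher {
    private static final int MAX_FRAMES_PER_FLUSH = 8; // hypothetical cap
    private final Queue<Object> newFrames = new ArrayDeque<>();
    private volatile boolean writeReady = true; // cleared while the socket is blocked

    void flushSenders() {
        // Suggestion 1: while still waiting for writability, leave queued
        // frames in newFrames so no header buffers are allocated for frames
        // that cannot be written yet.
        if (!writeReady) {
            return;
        }
        // Suggestion 2: bound how many frames are prepared per pass so no
        // more pooled buffers are taken than one flush can plausibly write.
        int prepared = 0;
        Object frame;
        while (prepared < MAX_FRAMES_PER_FLUSH && (frame = newFrames.poll()) != null) {
            prepareAndWrite(frame); // the header buffer is allocated only here
            prepared++;
        }
    }

    private void prepareAndWrite(Object frame) {
        // allocate the < 32 byte header and attempt the socket write
    }
}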