Description
When the server starts with the full profile, a heap dump shows that the biggest consumer object is io.netty.buffer.PoolArena$HeapArena.
bin/standalone.sh -c standalone-full.xml
It becomes worse if we enable buffer-pooling on the in-vm-acceptor and in-vm-connector:
/subsystem=messaging-activemq/server=default/in-vm-acceptor=in-vm:write-attribute(name=params.buffer-pooling, value=true)
/subsystem=messaging-activemq/server=default/in-vm-connector=in-vm:write-attribute(name=params.buffer-pooling, value=true)
:shutdown(restart=true)
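To undo the experiment, the same attribute can be written back to false and the server restarted; a sketch mirroring the commands above, assuming the default server and in-vm endpoint names:

```
/subsystem=messaging-activemq/server=default/in-vm-acceptor=in-vm:write-attribute(name=params.buffer-pooling, value=false)
/subsystem=messaging-activemq/server=default/in-vm-connector=in-vm:write-attribute(name=params.buffer-pooling, value=false)
:shutdown(restart=true)
```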
We had similar issues tracked before:
- JBEAP-6731 - closed due to no active response
- JBEAP-8911 - added a buffer-pooling parameter defaulting to false; this is not the best way to solve the problem
- JBEAP-23872 - a test OOME in a constrained environment with low memory and high CPU
After talking with fnigro:
- InVMConnection never uses off-heap pooled buffers right now.
- It can use pooled heap buffers, which can cause OOM because it allocates a lot of byte[] arenas.
- If it does not use pooled heap buffers, there is no OOM, but a huge amount of temporary allocations (better, but still suboptimal performance-wise).

The solution would be to allow off-heap pooled buffers: they won't pollute the heap and, given that they are pooled, they will be reused and shared with the Netty pool within the AMQ broker.
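The heap-vs-off-heap distinction above can be illustrated with plain java.nio buffers (a sketch only; Netty's PoolArena adds chunking and pooling on top of these primitives, and HeapArena pools exactly the kind of byte[]-backed buffer shown first):

```java
import java.nio.ByteBuffer;

public class OffHeapSketch {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] that lives in (and fills) the Java heap.
        // Pooling many of these is what makes PoolArena$HeapArena the top consumer.
        ByteBuffer heap = ByteBuffer.allocate(16 * 1024);
        assert heap.hasArray();      // has an on-heap backing byte[]

        // Direct buffer: memory lives outside the heap, so pooling it does not
        // grow heap usage the way pooled heap arenas do.
        ByteBuffer direct = ByteBuffer.allocateDirect(16 * 1024);
        assert direct.isDirect();
        assert !direct.hasArray();   // no on-heap backing byte[]

        System.out.println("heap-backed=" + heap.hasArray()
                + " direct=" + direct.isDirect());
    }
}
```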
This means that with buffer-pooling set to false (the current default) there is still a huge amount of temporary allocations, and we need to fix that.
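The trade-off between temporary allocations and pooling can be shown with a toy recycler (a hypothetical illustration, not ActiveMQ's or Netty's actual pool): without reuse, every message allocates a fresh buffer that the GC must collect; with reuse, only the first one does.

```java
import java.util.ArrayDeque;

public class PoolSketch {
    // A toy buffer pool: hand out a recycled byte[] chunk when one is available.
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int chunkSize;
    int allocations = 0; // fresh allocations, i.e. garbage-collection pressure

    PoolSketch(int chunkSize) { this.chunkSize = chunkSize; }

    byte[] acquire() {
        byte[] b = free.poll();
        if (b == null) {
            allocations++;           // pool empty: pay for a real allocation
            b = new byte[chunkSize];
        }
        return b;
    }

    void release(byte[] b) { free.push(b); }

    public static void main(String[] args) {
        PoolSketch pool = new PoolSketch(8192);
        // 10,000 "messages", at most one buffer in flight at a time.
        for (int i = 0; i < 10_000; i++) {
            byte[] b = pool.acquire();
            pool.release(b);
        }
        // Only the first message paid for an allocation; without the pool,
        // all 10,000 would have.
        System.out.println("allocations=" + pool.allocations);
    }
}
```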
Issue Links
- is cloned by ENTMQBR-7393 - io.netty.buffer.PoolArena$HeapArena consumes much heap memory with full profile in EAP server (Backlog)