There is a drawback caused by limiting the Artemis client thread pool in one of the LODH scenarios. Because the number of client threads was limited to 8 * (number of CPU cores) in [1], only 3 MDBs can be deployed to an EAP 7 server with 4 CPU cores.
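For illustration, the sizing rule quoted above works out as follows (a sketch of the formula, not the exact Artemis source):

{code:java}
// Illustration of the sizing rule quoted above (not the exact Artemis source):
// the client global thread pool is capped at 8 threads per available CPU core.
public class PoolSizeExample {
    public static void main(String[] args) {
        int maxPoolSize = 8 * Runtime.getRuntime().availableProcessors();
        // On the 4-core test machine: 8 * 4 = 32 client threads in total.
        System.out.println("client global thread pool max size = " + maxPoolSize);
    }
}
{code}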
Test scenario:
- start EAP 7 with queues InQueue and OutQueue deployed
- deploy 20 MDBs which consume messages from InQueue and resend them to OutQueue in an XA transaction (the in-vm connector is used); a sketch of such an MDB follows the scenario steps
- send 10 000 messages to InQueue
- wait for MDBs to process all messages
- receive messages from OutQueue
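A minimal sketch of such an MDB, assuming JMS 2.0 and container-managed XA transactions; the queue names come from the scenario above, while the class name and the java:/JmsXA pooled connection factory lookup are illustrative assumptions:

{code:java}
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/InQueue")
})
public class ResendingMdb implements MessageListener {

    // Pooled, XA-enlisted connection factory; the JNDI name is an assumption
    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory cf;

    @Override
    public void onMessage(Message message) {
        // The delivery from InQueue and the send to OutQueue below commit
        // or roll back together in the container-managed XA transaction.
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("OutQueue"));
            producer.send(message);
        } catch (JMSException e) {
            throw new RuntimeException(e); // forces rollback so the message is redelivered
        }
    }
}
{code}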
Expected result: no messages are lost or duplicated.
Actual result: MDBs cannot process messages because all threads in the thread pool are exhausted. Most of the client threads are waiting in:
"Thread-31 (ActiveMQ-client-global-threads-293753142)" #228 daemon prio=5 os_prio=0 tid=0x00007f306010e800 nid=0x7b3e waiting on condition [0x00007f2fe56aa000] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00000000d05204d8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:372) - locked <0x00000000d0520518> (a java.lang.Object) at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:303) at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.configureTransactionTimeout(ActiveMQSessionContext.java:509) at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.setTransactionTimeout(ClientSessionImpl.java:1175) at org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.setTransactionTimeout(ActiveMQXAResourceWrapperImpl.java:116) at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:637) at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:423) at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.beforeDelivery(MessageEndpointInvocationHandler.java:109) at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.handle(AbstractInvocationHandler.java:60) at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.doInvoke(MessageEndpointInvocationHandler.java:135) at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:73) at org.jboss.qa.hornetq.apps.mdb.LocalMdbFromQueueWithSecurity$$$endpoint2.beforeDelivery(Unknown Source) at org.apache.activemq.artemis.ra.inflow.ActiveMQMessageHandler.onMessage(ActiveMQMessageHandler.java:300) at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:932) at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:47) at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1045) at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
Customer impact: By default, only 3 MDBs can work in parallel on a machine with 4 CPU cores. Once the thread pool is exhausted, no messages can be processed.
Workaround: Increase the size of the client thread pool by setting the system property: -Dactivemq.artemis.client.global.thread.pool.max.size=...
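For example, the property can be passed to the server JVM via JAVA_OPTS in bin/standalone.conf; the value 500 below is illustrative and should be sized to the expected number of concurrent deliveries:

{code}
# bin/standalone.conf -- the value 500 is illustrative
JAVA_OPTS="$JAVA_OPTS -Dactivemq.artemis.client.global.thread.pool.max.size=500"
{code}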
is blocked by
- JBEAP-4223 Upgrade Artemis 1.1.0.wildfly-016 (Closed)

relates to
- JBEAP-2947 OutOfMemory at HP_UX java during CDI TCK execution (Closed)