JBoss Enterprise Application Platform: JBEAP-4203

Artemis client thread pool has inappropriate sizing policy for MDB


    • Type: Bug
    • Resolution: Done
    • Priority: Blocker
    • Fix Version/s: 7.0.0.CR2
    • Affects Version/s: 7.0.0.CR1
    • Component/s: ActiveMQ
    • Labels: None

      Steps to reproduce the issue - 100% reproducer:

      git clone git://git.app.eng.bos.redhat.com/jbossqe/eap-tests-hornetq.git
      cd eap-tests-hornetq/scripts/
      git checkout refactoring_modules
      groovy -DEAP_VERSION=7.0.0.CR1 PrepareServers7.groovy
      export WORKSPACE=$PWD
      export JBOSS_HOME_1=$WORKSPACE/server1/jboss-eap
      export JBOSS_HOME_2=$WORKSPACE/server2/jboss-eap
      export JBOSS_HOME_3=$WORKSPACE/server3/jboss-eap
      export JBOSS_HOME_4=$WORKSPACE/server4/jboss-eap
      cd ../jboss-hornetq-testsuite/
      mvn clean test -Dtest=Lodh1TestCase#testLimitedPoolSize  -DfailIfNoTests=false -Deap=7x   | tee log
      

      There is a drawback caused by limiting the Artemis client thread pool in one of the LODH scenarios. Because the number of client threads was limited to 8 * (number of CPU cores) in [1], only 3 MDBs can be deployed to an EAP 7 server with 4 CPU cores.
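
      For illustration, a minimal sketch of the sizing policy described above (this is not the actual Artemis code; the property name is taken from the workaround noted later in this issue):

      public class ClientPoolSizing {
          public static void main(String[] args) {
              // Default cap described above: 8 threads per CPU core.
              int cores = Runtime.getRuntime().availableProcessors();
              int defaultMax = 8 * cores; // e.g. 32 threads on a 4-core machine

              // The system property from the workaround section overrides the default.
              int max = Integer.getInteger(
                      "activemq.artemis.client.global.thread.pool.max.size", defaultMax);
              System.out.println("Client global thread pool max size: " + max);
          }
      }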

      Test scenario:

      • start EAP 7 with deployed queues InQueue and OutQueue
      • deploy 20 MDBs which consume messages from InQueue and resend them to OutQueue in an XA transaction (the in-vm connector is used); a sketch of such an MDB follows this list
      • send 10 000 messages to InQueue
      • wait for MDBs to process all messages
      • receive messages from OutQueue
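
      A minimal sketch of the kind of MDB the scenario deploys; the queue JNDI names and the class name are illustrative, not taken from the test suite:

      import javax.annotation.Resource;
      import javax.ejb.ActivationConfigProperty;
      import javax.ejb.MessageDriven;
      import javax.inject.Inject;
      import javax.jms.JMSContext;
      import javax.jms.Message;
      import javax.jms.MessageListener;
      import javax.jms.Queue;

      @MessageDriven(activationConfig = {
          @ActivationConfigProperty(propertyName = "destinationType",
                                    propertyValue = "javax.jms.Queue"),
          @ActivationConfigProperty(propertyName = "destinationLookup",
                                    propertyValue = "java:/jms/queue/InQueue")
      })
      public class ResendingMdb implements MessageListener {

          // Sending through the injected JMSContext enlists in the same
          // container-managed XA transaction as the incoming delivery.
          @Inject
          private JMSContext context;

          @Resource(lookup = "java:/jms/queue/OutQueue")
          private Queue outQueue;

          @Override
          public void onMessage(Message message) {
              context.createProducer().send(outQueue, message);
          }
      }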

      Expected results: No messages are lost/duplicated.

      Actual result: MDBs cannot process messages because all threads in the thread pool are exhausted. Most of the client threads are waiting in:

      "Thread-31 (ActiveMQ-client-global-threads-293753142)" #228 daemon prio=5 os_prio=0 tid=0x00007f306010e800 nid=0x7b3e waiting on condition [0x00007f2fe56aa000]
         java.lang.Thread.State: TIMED_WAITING (parking)
      	at sun.misc.Unsafe.park(Native Method)
      	- parking to wait for  <0x00000000d05204d8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
      	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
      	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
      	at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:372)
      	- locked <0x00000000d0520518> (a java.lang.Object)
      	at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:303)
      	at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.configureTransactionTimeout(ActiveMQSessionContext.java:509)
      	at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.setTransactionTimeout(ClientSessionImpl.java:1175)
      	at org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.setTransactionTimeout(ActiveMQXAResourceWrapperImpl.java:116)
      	at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:637)
      	at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:423)
      	at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.beforeDelivery(MessageEndpointInvocationHandler.java:109)
      	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:497)
      	at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.handle(AbstractInvocationHandler.java:60)
      	at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.doInvoke(MessageEndpointInvocationHandler.java:135)
      	at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:73)
      	at org.jboss.qa.hornetq.apps.mdb.LocalMdbFromQueueWithSecurity$$$endpoint2.beforeDelivery(Unknown Source)
      	at org.apache.activemq.artemis.ra.inflow.ActiveMQMessageHandler.onMessage(ActiveMQMessageHandler.java:300)
      	at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:932)
      	at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:47)
      	at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1045)
      	at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
      

      Customer impact: By default, only 3 MDBs can work in parallel on a machine with 4 CPU cores. Once the thread pool is exhausted, no messages can be processed.

      Workaround: Increase the size of the client thread pool by setting the system property: -Dactivemq.artemis.client.global.thread.pool.max.size=...
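
      In EAP the property is normally passed as a JVM argument on the server command line (as shown above). The hypothetical sketch below only illustrates that the property must be in place before any Artemis client classes initialize their global pool; 128 is an arbitrary example value, not a recommendation:

      public class ApplyWorkaround {
          public static void main(String[] args) {
              // Must run before Artemis client classes are loaded; in practice
              // this is done via -D... in the server startup configuration.
              System.setProperty(
                      "activemq.artemis.client.global.thread.pool.max.size", "128");
              // ... only after this point may Artemis client code run ...
          }
      }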

      [1] https://issues.jboss.org/browse/JBEAP-2947

      Attachments:
        1. thread-dump-fail.txt (344 kB)
        2. server.log (1.30 MB)

      Assignee: Martyn Taylor (mtaylor1@redhat.com)
      Reporter: Miroslav Novak (mnovak1@redhat.com)