WildFly / WFLY-9524

Messaging - default max client threads


Details

    • Type: Enhancement
    • Resolution: Unresolved
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 11.0.0.Final
    • Component/s: JMS
    • Labels: None

    Description

      By default, the size of the client thread pool is configured as 8 * the number of CPU cores.
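      As a minimal sketch of that default, and of one assumed way to override it (assuming the Artemis client honours the activemq.artemis.client.global.thread.pool.max.size system property described in its thread-management documentation; the exact property name should be verified against the Artemis version in use):

          // Sketch only: how the default size is derived, plus an assumed override hook.
          public class ClientPoolDefault {

              public static void main(String[] args) {
                  int cpus = Runtime.getRuntime().availableProcessors();
                  int defaultPoolSize = 8 * cpus;   // e.g. 8 on a single-CPU machine

                  // Assumption: the Artemis client reads this property when it creates its
                  // global thread pool; verify the name against the Artemis documentation.
                  System.setProperty("activemq.artemis.client.global.thread.pool.max.size", "30");

                  System.out.printf("CPUs=%d, default client thread pool=%d%n", cpus, defaultPoolSize);
              }
          }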

      On a 1-CPU machine with the default configuration and an MDB deployed on the server, resource starvation is possible.
      On one CPU there are, by default:

      • 8 ActiveMQ client threads (8 * CPU count)
      • 4 MDB instances (mdb-strict-max-pool = 4 * CPU count)
      • 15 JCA RA sessions consuming messages from the queue (default value of maxSession)

      Message consumption by the MDB gets stuck because all 8 client threads are waiting for large-message completion and no other thread is left to handle the remaining tasks.
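      The mismatch can be shown with a little arithmetic. The following is only an illustrative sketch using the default values listed above; the class and variable names are hypothetical and not part of WildFly or Artemis:

          // Illustrative only: default sizing on a 1-CPU machine.
          public class StarvationMath {

              public static void main(String[] args) {
                  int cpus = 1;

                  int clientThreads = 8 * cpus;   // ActiveMQ client global thread pool
                  int mdbInstances  = 4 * cpus;   // mdb-strict-max-pool derived size
                  int jcaSessions   = 15;         // default maxSession of the MDB activation

                  // Each RA session delivering a message occupies a client thread; with a
                  // large message that thread blocks in waitCompletion() until another
                  // thread feeds the message body, but no free thread remains.
                  boolean starvationPossible = jcaSessions > clientThreads;

                  System.out.printf("client threads=%d, MDB instances=%d, RA sessions=%d, starvation possible=%b%n",
                          clientThreads, mdbInstances, jcaSessions, starvationPossible);
              }
          }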

      Client thread waiting for large message completion
      "Thread-7 (ActiveMQ-client-global-threads)" #475 daemon prio=5 os_prio=0
      tid=0x000000000590f000 nid=0x7c7a in Object.wait() [0x00007fda43413000]
            java.lang.Thread.State: TIMED_WAITING (on object monitor)
                 at java.lang.Object.wait(Native Method)
                 at
                 org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.waitCompletion(LargeMessageControllerImpl.java:302)
                 - locked <0x00000000b0083e88> (a
                 org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
                 at
                 org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:276)
                 - locked <0x00000000b0083e88> (a
                 org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
                 at
                 org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkBuffer(ClientLargeMessageImpl.java:159)
                 at
                 org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkCompletion(ClientLargeMessageImpl.java:84)
                 at
                 org.apache.activemq.artemis.jms.client.ActiveMQMessage.doBeforeReceive(ActiveMQMessage.java:786)
                 at
                 org.apache.activemq.artemis.jms.client.ActiveMQTextMessage.doBeforeReceive(ActiveMQTextMessage.java:110)
                 at
                 org.apache.activemq.artemis.ra.inflow.ActiveMQMessageHandler.onMessage(ActiveMQMessageHandler.java:295)
                 at
                 org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1001)
                 at
                 org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
                 at
                 org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1124)
                 at
                 org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:122)
                 at
                 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                 at
                 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                 at java.lang.Thread.run(Thread.java:748)
      

      This can be fixed by adjusting the default values of these parameters.
      In this case, we need more client threads than JCA RA sessions (maxSession).

      To avoid resource starvation, the number of client threads must be greater than the sum of maxSession over all MDBs deployed on the server.
      We should check the number of client threads required by the deployments (MDBs) and, at a minimum, print a warning message when the size of the client thread pool may be insufficient.
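      A rough sketch of such a check follows. Everything here is hypothetical (no such WildFly API exists today); it only illustrates summing maxSession over the deployed MDBs and warning when the client thread pool cannot cover the total:

          import java.util.Map;
          import java.util.logging.Logger;

          // Hypothetical deployment-time check; the maxSession values would come from
          // the activation specs of the MDBs deployed on the server.
          public class ClientThreadPoolCheck {

              private static final Logger LOG = Logger.getLogger(ClientThreadPoolCheck.class.getName());

              static void warnIfPoolTooSmall(int clientThreadPoolSize, Map<String, Integer> maxSessionPerMdb) {
                  int requiredThreads = maxSessionPerMdb.values().stream()
                          .mapToInt(Integer::intValue)
                          .sum();

                  if (clientThreadPoolSize <= requiredThreads) {
                      LOG.warning(String.format(
                              "Client thread pool size (%d) may be insufficient for the %d JCA RA sessions "
                                      + "required by MDBs %s; consider increasing the pool size.",
                              clientThreadPoolSize, requiredThreads, maxSessionPerMdb.keySet()));
                  }
              }

              public static void main(String[] args) {
                  // The 1-CPU defaults from the description: pool of 8, one MDB with maxSession=15.
                  warnIfPoolTooSmall(8, Map.of("MyMDB", 15));
              }
          }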

            People

              Assignee: Emmanuel Hugonnet (ehugonne1@redhat.com)
              Reporter: Martin Styk (mstyk_jira, Inactive)