JGroups / JGRP-415

Asynchronous dispatching of messages in Multiplexer


    • Type: Feature Request
    • Resolution: Done
    • Priority: Critical
    • Fix Version/s: 2.5

      With the Multiplexer, it is fairly easy to get into a situation where one service's trouble handling a message prevents all other services from receiving further messages from that message's sender.

      E.g., three servers {A, B, C}, all running three services S1, S2 and S3 that share a mux channel. S1 is an instance of JBoss Cache. A.S1 sends a replication message to the cluster. On B, the thread carrying the message blocks waiting to acquire a lock in JBoss Cache. The ordering protocols in B's channel will prevent B.S2 and B.S3 from receiving any further messages from A until the lock is acquired on S1 or the attempt times out.

      JGRP-176 could deal with this at the MessageDispatcher/RequestCorrelator level, but a simpler solution is to add asynchronous message handling in the Multiplexer. A set of (bounded) queues is maintained in the Multiplexer, one per service. When a message arrives in Multiplexer.up(), it is added to the queue for its service and the JGroups up-thread returns immediately. The Multiplexer maintains a thread pool that reads messages off the queues and passes them up to the corresponding mux channel. The use of per-service queues ensures that messages are received in FIFO order at the application level.
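      A minimal sketch of this design follows, assuming illustrative names throughout (AsyncMuxDispatcher, ServiceReceiver, Message and the queue capacity are placeholders, not the actual Multiplexer API). Each service gets its own bounded queue drained by a single dedicated task, which preserves per-service FIFO order while ensuring a blocked service stalls only itself:

      import java.util.Map;
      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      // Sketch only: names are illustrative, not the actual Multiplexer API.
      public class AsyncMuxDispatcher {

          // Stand-in for a JGroups message addressed to one service.
          public interface Message {}

          // Callback a service's mux channel implements to receive messages.
          public interface ServiceReceiver {
              void receive(Message msg);
          }

          private static final int QUEUE_CAPACITY = 1000; // assumed per-service bound

          private final Map<String, BlockingQueue<Message>> queues = new ConcurrentHashMap<>();
          private final ExecutorService pool = Executors.newCachedThreadPool();

          // Register a service: one dedicated task drains its queue, preserving
          // per-service FIFO order. A blocked service stalls only its own task.
          public void register(String serviceId, ServiceReceiver receiver) {
              BlockingQueue<Message> q = new ArrayBlockingQueue<>(QUEUE_CAPACITY);
              queues.put(serviceId, q);
              pool.submit(() -> {
                  try {
                      while (true) {
                          receiver.receive(q.take());
                      }
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt(); // stop draining on shutdown
                  }
              });
          }

          // Called from Multiplexer.up(): enqueue and return, so the JGroups
          // up-thread is never held hostage by a slow service. put() blocks
          // only when this particular service's queue is full.
          public void up(String serviceId, Message msg) throws InterruptedException {
              queues.get(serviceId).put(msg);
          }
      }

      Note that one consumer task per queue, rather than several pool threads pulling from the same queue, is what keeps delivery FIFO within each service.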

      It is still possible for one service to block others if its queue is full. We need to determine exactly how to size the queues, i.e. based on the number of bytes of queued messages or on the number of messages. An application could then configure the size of its queue such that the queue shouldn't fill under expected load during any normal events (e.g. a JBC queue should be configured not to fill during the normal lock acquisition timeout).
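      As an illustration of the byte-based sizing option, here is a hedged sketch of a queue bounded by the total payload bytes it holds rather than by message count (SizedMessage and its length() method are assumptions, not JGroups API):

      import java.util.concurrent.LinkedBlockingQueue;
      import java.util.concurrent.Semaphore;

      // Assumed abstraction: a message that can report its payload size.
      interface SizedMessage {
          int length(); // payload size in bytes
      }

      // Sketch only: a queue bounded by total queued bytes, not message count.
      public class ByteBoundedQueue<M extends SizedMessage> {

          private final LinkedBlockingQueue<M> queue = new LinkedBlockingQueue<>();
          private final Semaphore byteBudget;

          // maxBytes must exceed the largest single message, or put() blocks forever.
          public ByteBoundedQueue(int maxBytes) {
              this.byteBudget = new Semaphore(maxBytes);
          }

          // Blocks while adding msg would exceed the configured byte budget.
          public void put(M msg) throws InterruptedException {
              byteBudget.acquire(msg.length());
              queue.offer(msg);
          }

          // Removes the next message (FIFO) and returns its bytes to the budget.
          public M take() throws InterruptedException {
              M msg = queue.take();
              byteBudget.release(msg.length());
              return msg;
          }
      }

      A count-based bound (e.g. a plain ArrayBlockingQueue) is simpler, but a few large messages can then consume far more memory than intended; a byte-based bound ties the limit directly to memory use.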

              Assignee: Bela Ban (rhn-engineering-bban)
              Reporter: Brian Stansberry (bstansbe@redhat.com)
              Votes: 0
              Watchers: 0
