JGroups · JGRP-1973

FRAG2: message corruption when thread pools are disabled



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Done
    • Affects Version: 3.6.6
    • Fix Version: 3.6.7
    • Component: None

      Enable the thread pools (default).


      When the thread pools (regular, OOB) are disabled and UDP is used, fragments of a message get corrupted because a single buffer (UDP.receive_buf) is reused for every received datagram.

      • If we send a message of 1000 bytes and FRAG2.frag_size is set to 600, FRAG2 sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
      • f1 is received and placed into receive_buf, then sent up the stack without copying, as the DirectExecutor thread pool doesn't copy the data.
      • f1 reaches FRAG2 and is added to the fragments list at index 0. The message's buffer still points into receive_buf.
      • f2 is received and overwrites f1 in receive_buf!
      • f2 reaches FRAG2 and is added to the fragments list at index 1. Its buffer also points into receive_buf.
      • FRAG2 now creates a new message whose buffer is receive_buf[0-600] plus receive_buf[600-1000].
      • The problem is that receive_buf now contains only f2, which overwrote f1, so the reassembled message is incorrect!
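      The sequence above can be reproduced in isolation with a few lines of Java. All names here are illustrative, not JGroups API: receiveBuf stands in for UDP.receive_buf, and reassemble() plays the role of FRAG2. With copying disabled, the reassembled message loses f1:

      import java.util.Arrays;

      // Minimal sketch of the buffer-reuse bug: both fragments keep a reference
      // to the same reused receive buffer unless the bytes are copied.
      public class FragReuseDemo {

          // Simulates receiving f1 (600 bytes of 1s) and f2 (400 bytes of 2s)
          // through a single reused buffer, then reassembling the 1000-byte message.
          public static byte[] reassemble(boolean copyFragments) {
              byte[] receiveBuf = new byte[600];           // reused for every datagram
              byte[][] fragments = new byte[2][];

              Arrays.fill(receiveBuf, (byte) 1);           // datagram 1 arrives: f1
              fragments[0] = copyFragments
                  ? Arrays.copyOfRange(receiveBuf, 0, 600) // fixed path: copy the bytes
                  : receiveBuf;                            // buggy path: keep a reference

              Arrays.fill(receiveBuf, 0, 400, (byte) 2);   // datagram 2 arrives: f2 overwrites f1!
              fragments[1] = copyFragments
                  ? Arrays.copyOfRange(receiveBuf, 0, 400)
                  : receiveBuf;

              byte[] msg = new byte[1000];                 // FRAG2-style reassembly
              System.arraycopy(fragments[0], 0, msg, 0, 600);
              System.arraycopy(fragments[1], 0, msg, 600, 400);
              return msg;
          }

          public static void main(String[] args) {
              System.out.println("without copy, first byte: " + reassemble(false)[0]); // 2 (corrupt, should be 1)
              System.out.println("with copy, first byte: " + reassemble(true)[0]);     // 1 (correct)
          }
      }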

      This probably affects FRAG, too.

      Not too critical, as the thread pools are enabled by default, and the option to disable them may even be removed in a future version.

      SOLUTION: remove the check for DirectExecutor and copy the data if copy_buffer is true

      // Code before the fix: the "pool instanceof DirectExecutor" test skips the
      // copy even though receive_buf will be reused; the fix removes that test,
      // so the data is copied whenever copy_buffer is true.
      if(!copy_buffer || pool instanceof DirectExecutor)
          pool.execute(new MyHandler(sender, data, offset, length)); // no copy when running on this thread
      else {
          byte[] tmp=new byte[length];
          System.arraycopy(data, offset, tmp, 0, length);
          pool.execute(new MyHandler(sender, tmp, 0, tmp.length));
      }
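      The copy in the else branch is what decouples the handler from the reused buffer. A standalone sketch of that copy-before-dispatch pattern (the class and method names below are hypothetical, not JGroups code):

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      // Hypothetical sketch: the receive loop copies the bytes it hands to the
      // pool, so later reuse of the receive buffer cannot corrupt a message
      // that is still being processed.
      public class CopyDispatch {

          // Copies [offset, offset+length) out of a buffer that is about to be reused.
          public static byte[] copyForPool(byte[] data, int offset, int length) {
              byte[] tmp = new byte[length];
              System.arraycopy(data, offset, tmp, 0, length);
              return tmp;
          }

          public static void main(String[] args) throws InterruptedException {
              ExecutorService pool = Executors.newFixedThreadPool(1);
              byte[] receiveBuf = {10, 20, 30, 40};

              byte[] tmp = copyForPool(receiveBuf, 1, 2);  // snapshot of {20, 30}
              pool.execute(() -> System.out.println("handler sees: " + tmp[0])); // prints 20

              receiveBuf[1] = 99;                          // buffer reused for the next datagram
              pool.shutdown();
              pool.awaitTermination(1, TimeUnit.SECONDS);
          }
      }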




            Assignee: Bela Ban (rhn-engineering-bban)
            Reporter: Bela Ban (rhn-engineering-bban)