- Bug
- Resolution: Done
- Minor
- 5.0.0.Final
I recently finished troubleshooting a unidirectional throughput bottleneck involving a JGroups application (Infinispan) communicating over a high-latency (~45 milliseconds) TCP connection.
The root cause was that JGroups configures the receive/send buffers on the listening socket too late. According to the tcp(7) man page:
On individual connections, the socket buffer size must be set prior to the listen(2) or connect(2) calls in order to have it take effect.
However, JGroups does not set the buffer size on the listening side until after accept().
The result is poor throughput when sending data from the client (connecting side) to the server (listening side). Because the effective TCP receive window ends up too small, throughput is ultimately latency-bound.
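To make the ordering concrete, here is a minimal java.net sketch of the listening side; the class name, port, and 1 MB buffer size are illustrative assumptions, not JGroups' actual code. The key point is that SO_RCVBUF has to be set on the unbound ServerSocket before bind() (which performs listen(2) under the hood), because the window scaling factor advertised in the handshake is derived from it and accepted sockets inherit it. For a sense of scale, a 64 KiB window over a 45 ms RTT caps throughput at roughly 64 KiB / 0.045 s ≈ 1.4 MiB/s, regardless of available bandwidth.

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ListenSideBufferSketch {
        public static void main(String[] args) throws Exception {
            int rcvBuf = 1024 * 1024; // illustrative 1 MB receive buffer

            // Create the server socket unbound so SO_RCVBUF can be set
            // before listen(2), which bind() performs under the hood.
            ServerSocket srv = new ServerSocket();
            srv.setReceiveBufferSize(rcvBuf); // must happen before bind()
            srv.bind(new InetSocketAddress(7800), 50);

            // Accepted connections inherit the listening socket's buffer;
            // calling setReceiveBufferSize() only here, after accept(),
            // is the too-late ordering described above.
            try (Socket conn = srv.accept()) {
                System.out.println("SO_RCVBUF = " + conn.getReceiveBufferSize());
            }
            srv.close();
        }
    }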
OK, so setting SO_RCVBUF works now, good to know. JGroups also sets SO_SNDBUF on sockets, but always before calling connect() (client side). Also, setting SO_SNDBUF on a socket returned by accept() apparently works. Besides, as discussed, there is no way to set it on a ServerSocket in Java...
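In plain java.net terms (again only a sketch, with an illustrative host and port), the connecting side can set both buffers before connect(), while ServerSocket exposes setReceiveBufferSize() but no SO_SNDBUF setter, so on the listening side the send buffer can only be adjusted on the Socket returned by accept():

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class ConnectSideBufferSketch {
        public static void main(String[] args) throws Exception {
            int bufSize = 1024 * 1024; // illustrative 1 MB

            // Create the socket unconnected so both buffers are in place
            // before connect(2) runs.
            Socket sock = new Socket();
            sock.setSendBufferSize(bufSize);    // SO_SNDBUF before connect()
            sock.setReceiveBufferSize(bufSize); // SO_RCVBUF before connect()
            sock.connect(new InetSocketAddress("server.example", 7800));

            // ... transfer data ...
            sock.close();

            // Note: java.net.ServerSocket has setReceiveBufferSize() but no
            // setSendBufferSize(); SO_SNDBUF on the listening side can only
            // be set on the Socket returned by accept(), which reportedly works.
        }
    }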
Thanks for your detailed analysis; always great to work with experts in the field!
Note to self: I should familiarize myself with the Linux networking code, all the more since I have kernel people in my company I can ask for advice!
Cheers,