Scenario
- 3 x 1200 = 3600 clients sending requests sequentially to EAP (for details of the client requests, see the attached Jython script grinder.py)
- the requested application is the attached undertow-perf-test-app.war, a simple JSP page containing images
- connections between the clients and EAP are HTTP with keep-alive
- EAP runs with the default standalone.xml configuration
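The client behaviour described above can be sketched in plain Python. This is not the attached grinder.py, just a minimal illustration of one client issuing sequential requests over a single keep-alive connection; the host, port, and request count are placeholders.

```python
# Sketch of one client: sequential HTTP requests reusing a single
# keep-alive connection. Host/port/request count are placeholders,
# NOT the values from the attached grinder.py.
import http.client

def run_client(host="localhost", port=8080, requests=5):
    # One persistent connection per client; HTTP/1.1 keeps it alive
    # between requests by default.
    conn = http.client.HTTPConnection(host, port)
    statuses = []
    for _ in range(requests):
        conn.request("GET", "/")
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection can be reused
        statuses.append(resp.status)
    conn.close()
    return statuses
```

In the actual test this pattern runs in 1200 threads on each of the 3 client machines.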
Transactions per second results
- 24% drop compared to EAP 6.4.7 with the HTTP/1.1 connector protocol
- 8% drop compared to EAP 6.4.7 with the Http11NioProtocol connector protocol
- for more details see attached reports
HW/JDK
- JDK: Oracle 1.8.0_71
- 4 identical machines (3 for clients, 1 for the server) with the following lscpu output
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 60
Model name: Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40GHz
Stepping: 3
CPU MHz: 3723.132
BogoMIPS: 6784.24
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Load framework used
- Grinder http://grinder.sourceforge.net
- to reproduce on equivalent HW, use Grinder 3.11 with the attached grinder.py script; just fix the hardcoded server address
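For reproduction, the client load can be spread over worker processes and threads via grinder.properties. The fragment below is only a sketch: the property names are standard Grinder 3 properties, but the process/thread split shown here (8 x 150 = 1200 per client machine) is an assumption, not taken from the attached files.

```properties
# Sketch only -- the split into processes/threads is an assumption.
grinder.script = grinder.py   # the attached client script
grinder.processes = 8         # worker JVMs per client machine (assumed)
grinder.threads = 150         # threads per process: 8 x 150 = 1200 clients
grinder.runs = 0              # 0 = run until the console stops the test
grinder.useConsole = true
```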
Reports generated with Grinder Analyzer
- http://track.sourceforge.net
- requires Jython 2.5.2
- clones: JBEAP-4425 [GSS](7.2.z) EAP 7 HTTP keep-alive performance drop compared to EAP 6.4 CP7 for default standalone.xml configuration and high number of clients (Open)
- incorporates: JBEAP-4425 [GSS](7.2.z) EAP 7 HTTP keep-alive performance drop compared to EAP 6.4 CP7 for default standalone.xml configuration and high number of clients (Open)