- Type: Epic
- Resolution: Done
- Priority: Major
- Labels: qe-performance-scalability
- Status: In Progress
- Epic progress: 0% To Do, 0% In Progress, 100% Done
This should be tested on the Dev Preview release.
Goals
- Support at least 20K NetFlows per second
- Ensure all UI response times remain reasonable under sustained flow load
- Document behavior when system is overwhelmed
- Take measurements (see below)
- Come up with a general formula and rough calculation of the resources needed for various scenarios (see the sizing sketch below)
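As a starting point for the sizing goal above, a rough back-of-the-envelope relation between flow rate, sampling, and storage can be written down; the symbols, and folding compression into the per-record size, are assumptions to calibrate against measurements, not NetObserv specifications.

```latex
% Hedged sizing sketch: all symbols are assumptions, to be calibrated from the tests below.
% S_day : storage consumed per day
% R     : raw flow rate seen by the cluster (flows per second)
% C     : sampling ratio (100 for 100:1, 1 for no sampling)
% B     : average bytes stored per flow record, after compression
\[
  S_{\mathrm{day}} \approx \frac{R}{C} \times B \times 86400
\]
% Days of retention on a volume of capacity V:
\[
  T_{\mathrm{retention}} \approx \frac{V}{S_{\mathrm{day}}}
\]
```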
Hardware and Software Considerations
These are the attributes to consider when building the test matrix (see the enumeration sketch after this list).
- CPU: 1 core, 2 cores, 4 cores, 8 cores
- Memory: 0.5 GB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB
- Storage: 20 GB, 200 GB, 500 GB, 1 TB, 10 TB
- Bandwidth: 10 Gbps, 100 Gbps
- Sampling rate: 400:1, 100:1, 1:1 (no sampling)
- Nodes: 7 (25th percentile), 9 (median), 126 (75th percentile)
- Type of traffic: breakdown of web (80/443), database, video, mail
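To see how large the full matrix is, and why only a few representative scenarios are practical, a small sketch can enumerate the combinations. The dictionary keys are illustrative names only; the values are copied from the list above.

```python
# Minimal sketch: enumerate the attribute combinations above into a test matrix.
from itertools import product

ATTRIBUTES = {
    "cpu_cores": [1, 2, 4, 8],
    "memory_gb": [0.5, 1, 2, 4, 8, 16],
    "storage_gb": [20, 200, 500, 1000, 10000],
    "bandwidth_gbps": [10, 100],
    "sampling": ["400:1", "100:1", "1:1"],
    "nodes": [7, 9, 126],
    "traffic": ["web", "database", "video", "mail"],
}

# The full cross product is far too large to run exhaustively; print its size
# and one sample row, then pick representative scenarios (see the testbed below).
combos = list(product(*ATTRIBUTES.values()))
print(f"{len(combos)} total combinations")
print(dict(zip(ATTRIBUTES.keys(), combos[0])))
```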
Testbed and Measurements
- (Typical) 2 cores, 1 GB RAM, 200 GB storage, 10 Gbps bandwidth, 9 nodes
  - 100:1 sampling, medium traffic
  - 400:1 sampling, medium traffic
  - 1:1 sampling, low traffic
- (Min) 1 core, 0.5 GB RAM, 20 GB storage, 10 Gbps bandwidth, 7 nodes
  - 100:1 sampling, low traffic
  - 400:1 sampling, medium traffic
- (Max) 8 cores, 16 GB RAM, 1 TB storage, 100 Gbps bandwidth, 50 nodes
  - 100:1 sampling, medium traffic
  - 400:1 sampling, high traffic
For each scenario, measure the following (a traffic-generation sketch follows this list):
- Maximum NetFlows per second
- How many days/hours of data can be retained before storage reaches capacity?
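One way to measure the maximum sustained NetFlows per second is to replay synthetic NetFlow v5 packets at a controlled rate and raise the rate until the pipeline starts dropping or lagging. The sketch below is only an illustration: the collector address, port 2055, the 20K flows/s target, and the record contents are assumptions, not the actual NetObserv ingestion setup.

```python
#!/usr/bin/env python3
# Hedged sketch: send synthetic NetFlow v5 packets at a target rate so the
# collector's sustained flows/sec can be observed on the receiving side.
import socket
import struct
import time

COLLECTOR = ("127.0.0.1", 2055)   # assumed collector endpoint
TARGET_FPS = 20_000               # flows per second to attempt
RECORDS_PER_PACKET = 30           # NetFlow v5 allows up to 30 records per packet

def v5_packet(seq: int, count: int) -> bytes:
    """Build one NetFlow v5 packet with `count` identical illustrative records."""
    uptime_ms = int(time.monotonic() * 1000) & 0xFFFFFFFF
    header = struct.pack("!HHIIIIBBH", 5, count, uptime_ms, int(time.time()), 0, seq, 0, 0, 0)
    record = struct.pack(
        "!IIIHHIIIIHHBBBBHHBBH",
        0x0A000001, 0x0A000002, 0,            # src 10.0.0.1, dst 10.0.0.2, next hop
        1, 2,                                 # input/output interface index
        10, 1500,                             # packets and octets in the flow
        max(0, uptime_ms - 1000), uptime_ms,  # flow start/end (sysuptime ms)
        12345, 443,                           # src/dst port (HTTPS-like traffic)
        0, 0x10, 6, 0,                        # pad, TCP ACK flag, protocol TCP, ToS
        0, 0, 24, 24, 0,                      # AS numbers, masks, pad
    )
    return header + record * count

def run(duration_s: int = 60) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent_flows, seq = 0, 0
    packets_per_second = TARGET_FPS // RECORDS_PER_PACKET
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        tick = time.monotonic()
        for _ in range(packets_per_second):
            sock.sendto(v5_packet(seq, RECORDS_PER_PACKET), COLLECTOR)
            seq += RECORDS_PER_PACKET
            sent_flows += RECORDS_PER_PACKET
        time.sleep(max(0.0, 1.0 - (time.monotonic() - tick)))  # pace to ~1s ticks
    elapsed = time.monotonic() - start
    print(f"sent {sent_flows} flows in {elapsed:.1f}s ({sent_flows / elapsed:.0f} flows/s)")

if __name__ == "__main__":
    run()
```

The ceiling is whatever rate the collector sustains without loss, compared against the sender's reported rate.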
Other questions:
- What resources are needed to reach 20K NetFlows per second? 50K NetFlows per second?
- How much storage is used for 1K NetFlows per second? Is storage usage linear?
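As a worked example for the storage question, applying the sizing formula from the Goals section with an assumed average of 200 bytes stored per flow record (an assumption, not a measured NetObserv value) gives:

```python
# Hedged worked example: storage used at 1K flows/s, assuming ~200 bytes/record.
FLOWS_PER_SECOND = 1_000
BYTES_PER_RECORD = 200          # assumption; replace with the measured average
SECONDS_PER_DAY = 86_400

daily_gb = FLOWS_PER_SECOND * BYTES_PER_RECORD * SECONDS_PER_DAY / 1e9
print(f"~{daily_gb:.1f} GB/day at 1K flows/s")                      # ~17.3 GB/day
print(f"~{200 / daily_gb:.0f} days until a 200 GB volume fills")    # ~12 days
```

If usage is linear, scaling to 20K flows/s simply multiplies this by 20; the tests should confirm whether indexing or compression breaks that linearity.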
- is related to: NETOBSERV-394 Global performance - 4.12 (Closed)
- links to:
  1. Docs Tracker (Closed, Unassigned)
  2. QE Tracker (Closed, Unassigned)