-
Bug
-
Resolution: Done
-
Minor
-
netobserv-1.2
-
False
-
None
-
False
-
-
-
NetObserv - Sprint 236, NetObserv - Sprint 237, NetObserv - Sprint 238, NetObserv - Sprint 239
-
Low
Over several rounds of testing NetObserv on our Baremetal cluster, we've observed one or more eBPF pods getting OOMKilled and going into CrashLoopBackOff state: https://docs.google.com/document/d/1DOfV17DEuqI0YSW6oOLc_XQlze-ZFS5FtqYf39n179E/edit?usp=sharing
This can be mitigated by increasing the eBPF memory limit from the default of 800Mi; we've seen success with a limit of 2000Mi, though the value needed is likely tied to the amount of traffic we are generating (see the sketch below).
Marking this as low severity since it doesn't seem to result in dropped flows, but we should either increase the default memory limit for eBPF or find a way to balance this load across the other eBPF pods.
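As a rough sketch of the mitigation, assuming the FlowCollector instance is named "cluster" and that the installed CRD exposes standard ResourceRequirements fields under spec.agent.ebpf.resources (the apiVersion below is an assumption; check it against the installed operator):

    apiVersion: flows.netobserv.io/v1beta1
    kind: FlowCollector
    metadata:
      name: cluster
    spec:
      agent:
        type: EBPF
        ebpf:
          resources:
            limits:
              # raised from the 800Mi default; 2000Mi avoided OOMKills in our test runs
              memory: 2000Mi

The same change can be applied to a running instance with, for example, kubectl patch flowcollector cluster --type=merge -p '{"spec":{"agent":{"ebpf":{"resources":{"limits":{"memory":"2000Mi"}}}}}}'.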
- split from: NETOBSERV-902 "QE: Run performance tests for 1.2 release" (Closed)
- links to