Type: Story
Resolution: Done
Priority: Normal
Work Category: BU Product Work
Parent Feature: OCPSTRAT-156 - Netobserv operator: Make configuration simpler
Sprints: NetObserv - Sprint 235, NetObserv - Sprint 236, NetObserv - Sprint 237, NetObserv - Sprint 238, NetObserv - Sprint 239, NetObserv - Sprint 240
During perfscale tests, I've hit several instances of Loki's per_stream_rate_limit, which is currently not configurable through the LokiStack CRD.
time=2022-11-11T16:23:45Z level=info component=client error=server returned HTTP status 429 Too Many Requests (429): entry with timestamp 2022-11-11 16:23:40.572 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{DstK8S_Namespace="openshift-dns", DstK8S_OwnerName="dns-default", FlowDirection="1", SrcK8S_Namespace="netobserv", SrcK8S_OwnerName="flowlogs-pipeline-transformer", app="netobserv-flowcollector"}' totaling 660B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased' for stream: {DstK8S_Namespace="openshift-dns", DstK8S_OwnerName="dns-default", FlowDirection="1", SrcK8S_Namespace="netobserv", SrcK8S_OwnerName="flowlogs-pipeline-transformer", app="netobserv-flowcollector"}, fields.level=warn fields.msg=error sending batch, will retry host=lokistack-distributor-http.openshift-operators-redhat.svc:3100 module=export/loki status=429
Some investigation may be needed to figure out in which scenarios we can hit these limits; see discussion: https://coreos.slack.com/archives/CB3HXM2QK/p1668193666267879?thread_ts=1668118579.602619&cid=CB3HXM2QK
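For reference, these are plain Loki settings under limits_config; a minimal sketch using Loki's documented defaults (the "3MB/sec" in the log above is the default per_stream_rate_limit):

limits_config:
  # Maximum ingestion rate per stream; the 429 above was returned at this 3MB/sec default.
  per_stream_rate_limit: 3MB
  # Extra burst allowance per stream on top of the rate limit (default 15MB).
  per_stream_rate_limit_burst: 15MB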
======
This story will cover both (see the CRD sketch after this list):
- per_stream_rate_limit
- per_stream_rate_limit_burst
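As a rough illustration of the goal, here is a sketch of how these could surface on the LokiStack resource, assuming they are exposed under spec.limits.global.ingestion; the perStreamRateLimit / perStreamRateLimitBurst field names and MB units are assumptions, not a confirmed API (the name and namespace below are taken from the distributor host in the log above):

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: lokistack
  namespace: openshift-operators-redhat
spec:
  # size, storage, etc. omitted for brevity
  limits:
    global:
      ingestion:
        # Hypothetical fields mirroring Loki's per_stream_rate_limit and
        # per_stream_rate_limit_burst (values in MB); the exact names depend on
        # what the loki-operator ends up exposing.
        perStreamRateLimit: 10
        perStreamRateLimitBurst: 30

Whatever shape the API takes, it ultimately has to translate into the limits_config settings shown above so the distributor stops returning 429s for hot streams.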
is related to: NETOBSERV-975 - Flows dropped due to Loki stream limit during large traffic spikes (Closed)
(2 links to, 1 mentioned on)