- Bug
- Resolution: Done
- Major
- None
- Logging 5.7.7, Logging 5.8.0
- False
- None
- False
- NEW
- NEW
-
-
-
- Log Storage - Sprint 249, Log Storage - Sprint 250, Log Storage - Sprint 251, Log Storage - Sprint 252, Log Storage - Sprint 253, Log Storage - Sprint 254, Log Storage - Sprint 255, Log Storage - Sprint 256
Description of problem:
The ingester pod fails to start:

    logging-loki-ingester-0   0/1   Running   0   3d
Version-Release number of selected component (if applicable):
Logging 5.7/5.8
How reproducible:
Steps to Reproduce:
1. Install/configure the Loki Operator on an OpenShift cluster.
2. Drain the node on which the ingester pod runs, causing the pod to be rescheduled onto another node (see the example commands after this list).
3. Notice that the ingester pod is now stuck in an infinite, CPU-bound loop, and the following messages repeat endlessly every 60 seconds in the ingester pod logs:
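For reference, a minimal sketch of steps 2-3 from the command line could look like the following. The openshift-logging namespace and the node name are assumptions; adjust them to the actual deployment.

    # Find the node currently hosting the ingester pod (namespace assumed to be openshift-logging)
    oc get pods -n openshift-logging -o wide | grep logging-loki-ingester

    # Drain that node so the ingester pod is rescheduled elsewhere (<node-name> is a placeholder)
    oc adm drain <node-name> --ignore-daemonsets --delete-emptydir-data

    # Watch the rescheduled ingester pod; it stays at 0/1 Ready
    oc get pods -n openshift-logging -w | grep logging-loki-ingester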
Actual results:
Ingester pod fails to start
Expected results:
The pod starts without problems.
Additional info:
Upstream bug: https://github.com/grafana/loki/issues/10988
Clearing the WAL on the ingester fixes the problem.
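As a rough sketch of that workaround, the WAL data can be removed from the stuck ingester and the pod restarted. The namespace (openshift-logging) and the WAL path (/tmp/wal) are assumptions; confirm the actual WAL directory from the ingester's Loki configuration before deleting anything.

    # Open a shell in the stuck ingester container (the pod is Running, just not Ready)
    oc rsh -n openshift-logging logging-loki-ingester-0

    # Inside the container: remove the WAL contents (path is an assumption; check the wal dir setting first)
    rm -rf /tmp/wal/*

    # Exit, then delete the pod so it restarts with an empty WAL
    oc delete pod -n openshift-logging logging-loki-ingester-0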
- is related to
  - LOG-5614 [release-5.9] Replay Memory Ceiling not set when 1x.demo size is used (Closed)
  - LOG-5615 [release-5.8] Replay Memory Ceiling not set when 1x.demo size is used (Closed)
  - LOG-5616 [release-5.7] Replay Memory Ceiling not set when 1x.extra-small size is used (Closed)
  - LOG-5617 [release-5.6] Replay Memory Ceiling not set when 1x.extra-small size is used (Closed)
- links to