-
Bug
-
Resolution: Done
-
Critical
-
Logging 5.2
-
False
-
False
-
NEW
-
NEW
-
Before this update, restarting the Fluentd collector while it was scraping logs from many containers delayed the initialization of the metrics server endpoint. This update resolves the issue by delaying a refresh of the file list.
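The release note describes an ordering change: the metrics server endpoint comes up first and the refresh of the container log file list is deferred. The following is a minimal Python sketch of that pattern only, not Fluentd's actual (Ruby) implementation; the names MetricsHandler, refresh_file_list, and start_collector are hypothetical, and the sleep stands in for scraping logs from many containers.

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS_PORT = 24231  # the port Prometheus scrapes in the error below


class MetricsHandler(BaseHTTPRequestHandler):
    """Minimal /metrics endpoint standing in for the collector's metrics server."""

    def do_GET(self):
        if self.path == "/metrics":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"fluentd_up 1\n")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet


def refresh_file_list():
    """Hypothetical placeholder for the slow scan of container log files."""
    time.sleep(5)


def start_collector():
    # 1. Start the metrics server first, so the scrape target is reachable
    #    as soon as the collector restarts.
    server = HTTPServer(("0.0.0.0", METRICS_PORT), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # 2. Only then refresh the file list; a restart under heavy load no
    #    longer delays the /metrics endpoint from listening.
    refresh_file_list()
    return server


if __name__ == "__main__":
    start_collector()
    time.sleep(60)
```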
-
-
Logging (Core) - Sprint 205
Description of problem:
The Fluentd target is shown as down in the Prometheus UI, with the following error (repeated for each fluentd target):
"Get \"https://10.x.x.x:24231/metrics\": dial tcp 10.x.x.x:24231: connect: connection refused"
"Get \"https://10.x.x.x:24231/metrics\": dial tcp 10.x.x.x:24231: connect: connection refused"
"Get \"https://10.x.x.x:24231/metrics\": dial tcp 10.x.x.x:24231: connect: connection refused"
"Get \"https://10.x.x.x:24231/metrics\": dial tcp 10.x.x.x:24231: connect: connection refused"
"Get \"https://10.x.x.x:24231/metrics\": dial tcp 10.x.x.x:24231: connect: connection refused"
"Get \"https://10.x.x.x:24231/metrics\": dial tcp 10.x.x.x:24231: connect: connection refused"
The fluentd pods are running:
fluentd-2dbd7 1/1 Running 0
fluentd-8rtxj 1/1 Running 0
fluentd-bn9s5 1/1 Running 0
fluentd-fxhsd 1/1 Running 0
fluentd-kjn29 1/1 Running 0
fluentd-x8cz6 1/1 Running 0
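Although the pods report Running, the scrape fails with connection refused. A quick check that mirrors the failing scrape is sketched below, assuming it is run from a host or pod that can reach the fluentd pod network; FLUENTD_POD_IP is a placeholder (substitute a real pod IP from `oc get pods -o wide`) and check_metrics_port is a hypothetical helper, not part of any product tooling.

```python
import socket

# Placeholder: substitute a real fluentd pod IP.
FLUENTD_POD_IP = "10.x.x.x"
METRICS_PORT = 24231


def check_metrics_port(ip: str, port: int, timeout: float = 3.0) -> None:
    """Attempt the same TCP connection Prometheus makes and report the result."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            print(f"{ip}:{port} is accepting connections")
    except (ConnectionRefusedError, socket.timeout, OSError) as exc:
        print(f"{ip}:{port} is NOT reachable: {exc}")


if __name__ == "__main__":
    check_metrics_port(FLUENTD_POD_IP, METRICS_PORT)
```

A refused connection here, while the pod is Running, matches the reported symptom: the pod is healthy but its metrics server has not started listening on 24231.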
Version-Release number of selected component (if applicable):
RHOCP 4.6
How reproducible:
n/a
Actual results:
Fluentd metrics are not available and alerts are firing because the target is down.
Expected results:
Prometheus should be able to scrape the Fluentd pod metrics.
Additional info:
- is cloned by
-
LOG-1685 [release-5.1] Fluentd pod metric not able to scrape, Fluentd target down
- Closed
- links to