- Bug
- Resolution: Done-Errata
- Normal
- Logging 5.8.5
- False
- None
- False
- NEW
- NEW
- Before this update, an issue in the metrics collection code of the Cluster Logging Operator caused stale telemetry metrics to be reported. With this update, a rewrite of the metrics code resolves the issue.
- Bug Fix
- Log Storage - Sprint 252
- Moderate
Description of problem:
The telemetry instrumentation of Cluster Logging Operator produces stale metrics containing previous and intermediate configuration. For example, creating a ClusterLogging configuration referencing the default LokiStack log storage produces an "empty" set of metrics (all options set to zero) and the "current" set of metrics:
# Example for log_forwarder_output_info metric:
log_forwarder_output_info{azureMonitor="0", cloudwatch="0", default="0", elasticsearch="0", fluentdForward="0", googleCloudLogging="0", http="0", kafka="0", loki="0", splunk="0", syslog="0"} 0
log_forwarder_output_info{azureMonitor="0", cloudwatch="0", default="0", elasticsearch="0", fluentdForward="0", googleCloudLogging="0", http="0", kafka="0", loki="1", splunk="0", syslog="0"} 1
Similarly, switching from Elasticsearch to Loki leaves the old Elasticsearch metrics present in the output (with a value of 0).
Version-Release number of selected component (if applicable):
5.9.0
Steps to Reproduce:
- Install Cluster Logging Operator
- Configure a ClusterLogging resource
- Look at the metrics from the operator (prefixed with "log_")
Actual results:
Metric set contains stale metrics.
Expected results:
Metrics should reflect the current state of the configuration only.
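The behavior above can be sketched with a minimal, stdlib-only Go example. This is a hypothetical simplification, not the operator's actual code: the `gaugeVec` type stands in for a Prometheus GaugeVec, where every distinct label set creates a child series that persists until it is explicitly cleared, so re-publishing a new configuration without clearing the old one leaves stale series in the scrape output.

```go
package main

import "fmt"

// gaugeVec is a hypothetical stand-in for a Prometheus GaugeVec:
// each distinct label set creates a child series that persists
// until it is explicitly deleted.
type gaugeVec struct {
	name   string
	series map[string]float64 // label set -> value
}

func newGaugeVec(name string) *gaugeVec {
	return &gaugeVec{name: name, series: map[string]float64{}}
}

func (g *gaugeVec) set(labels string, v float64) { g.series[labels] = v }

// reset drops all child series; clearing before re-publishing the
// current configuration is one way to avoid stale series.
func (g *gaugeVec) reset() { g.series = map[string]float64{} }

// collect renders every live series in exposition-like form.
func (g *gaugeVec) collect() []string {
	out := []string{}
	for labels, v := range g.series {
		out = append(out, fmt.Sprintf("%s{%s} %g", g.name, labels, v))
	}
	return out
}

func main() {
	g := newGaugeVec("log_forwarder_output_info")

	// First reconciliation publishes an all-zero label set...
	g.set(`loki="0"`, 0)
	// ...and a later reconciliation publishes the current state
	// without clearing the old series, so both are exposed.
	g.set(`loki="1"`, 1)
	fmt.Println("stale:", len(g.collect()), "series")

	// Resetting before each publish leaves only the current state.
	g.reset()
	g.set(`loki="1"`, 1)
	fmt.Println("fixed:", len(g.collect()), "series")
}
```

In the real client_golang library the analogous operation is resetting or deleting the affected child series of the vector before re-exporting the current configuration.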
- clones: LOG-5426 [release-5.9] Cluster Logging Operator is producing stale telemetry metrics (Closed)
- is related to: LOG-5131 OpenShift Logging Telemetry (Code Review)
- links to: RHSA-2024:131445 security update Logging for Red Hat OpenShift - 5.8.7