Bug
Resolution: Done
Major
Logging 5.6.z
False
False
NEW
VERIFIED
Before this update, Splunk and Google Cloud Logging were not included in the log_forwarder_output_info metric. This update resolves the issue.
Log Collection - Sprint 235, Log Collection - Sprint 236
Description of problem:
The log_forwarder_output_info metric does not include the Splunk and Google Cloud Logging output types. Current result of the query:
{
  "metric": {
    "__name__": "log_forwarder_output_info",
    "cloudwatch": "0",
    "default": "0",
    "elasticsearch": "0",
    "endpoint": "http-metrics",
    "fluentdForward": "0",
    "instance": "10.128.2.18:8686",
    "job": "cluster-logging-operator-metrics",
    "kafka": "0",
    "loki": "0",
    "namespace": "openshift-logging",
    "pod": "cluster-logging-operator-558fbdd69c-swk9p",
    "service": "cluster-logging-operator-metrics",
    "syslog": "0"
  },
  "value": [
    1680831975.118,
    "1"
  ]
}
Steps to Reproduce:
- Query the log_forwarder_output_info metric through the Prometheus API:
token=$(oc create token prometheus-k8s -n openshift-monitoring)
pod=$(oc get pods -o name | head -1)
route=$(oc get route -n openshift-monitoring prometheus-k8s -o jsonpath='{.spec.host}')
oc exec "$pod" -- curl -k -s -H "Authorization: Bearer $token" "https://${route}/api/v1/query" --data-urlencode "query=log_forwarder_output_info" | jq '.data.result'
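To look at just the exported label names, the same query can be piped through a jq filter. This is a convenience sketch that assumes the result shape shown in the description (a single series in .data.result):
oc exec "$pod" -- curl -k -s -H "Authorization: Bearer $token" "https://${route}/api/v1/query" --data-urlencode "query=log_forwarder_output_info" | jq '.data.result[0].metric | keys'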
Expected Result:
The metric includes labels for the Splunk and Google Cloud Logging output types:
"metric": { "__name__": "log_forwarder_output_info", "cloudwatch": "0", "default": "0", "elasticsearch": "0", "fluentdForward": "0", "syslog": "0", "kafka": "0", "loki": "0", "splunk": "0", "gcp logging": "0", "namespace": "openshift-logging", "pod":"cluster-logging-operator-558fbdd69c-swk9p", "endpoint": "http-metrics", "service": "cluster-logging-operator-metrics", "instance": "10.128.2.18:8686", "job": "cluster-logging-operator-metrics" }