-
Bug
-
Resolution: Done
-
Normal
-
Logging 5.6.z, Logging 5.4.z, Logging 5.5.z
-
False
-
None
-
False
-
NEW
-
VERIFIED
-
Before this change, no alerts were implemented to support the vector collector implementation. This change adds vector alerts and deploys a separate set of alerts depending upon the chosen collector implementation.
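As a quick way to see which alert set was deployed for the active collector, the rule group names in the generated PrometheusRule can be listed. This is a minimal sketch reusing the resource name from the verification commands below; the group names it returns depend on the shipped rule files and are not confirmed here:
oc get prometheusrules.monitoring.coreos.com collector -o json | jq '.spec.groups[].name'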
-
Log Collection - Sprint 226, Log Collection - Sprint 227
# oc get prometheusrules.monitoring.coreos.com collector -o json | tee fluentd.collector.json | jq '.spec.groups[]|select(.name=="logging_fluentd.alerts")|.rules[].alert'
"FluentdNodeDown"
"FluentdQueueLengthIncreasing"
"FluentDHighErrorRate"
"FluentDVeryHighErrorRate"
How reproducible:
Always
Steps to Reproduce:
1) Forward logs to Elasticsearch via vector using a ClusterLogging CR like the one below (a combined apply-and-check sketch follows step 2):
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
  annotations:
    logging.openshift.io/preview-vector-collector: "enabled"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    type: "vector"
2) Check the alert names in the generated PrometheusRule:
oc get prometheusrules.monitoring.coreos.com collector -o json |jq '.spec.groups[]|select(.name=="logging_fluentd.alerts")|.rules[].alert'
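A minimal end-to-end sketch of the two steps above, assuming the ClusterLogging CR from step 1 is saved locally as clusterlogging.yaml (hypothetical filename):
oc apply -f clusterlogging.yaml
oc get prometheusrules.monitoring.coreos.com collector -o json | jq '.spec.groups[].rules[].alert'
The second command drops the group filter and prints every alert name in the rule, which makes it easy to see whether fluentd-named or vector-specific alerts were deployed for the vector collector.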