-
Bug
-
Resolution: Done
-
Critical
-
None
-
OSSM 2.0.8, OSSM 2.1.0, OSSM 2.1.1
-
False
-
None
-
False
-
devel_ack, pm_ack, qa_ack
-
Compatibility/Configuration, User Experience
Issue Description
- Service mesh Grafana dashboards show "No data" in several panels that refer to "container_" metrics
- The customer expects a solution or workaround for scraping these metrics so that the affected dashboard panels show data
Expected Behavior
- Every panel in Grafana shows data
- Grafana panels referring to "container_" metrics in targets show data in the graph
How to reproduce this problem
1. Check the route of Grafana in the OSSM namespace:
   $ oc get route/grafana -n istio-system
2. Log in to the Grafana web UI:
   https://grafana-istio-system.apps.ocp4.example.com
3. Check the "No data" panels that refer to "container_" metrics in their targets:
   - Istio > Istio Control Plane Dashboard: Disk panel, CPU panel
   - Istio > Istio Performance Dashboard: most panels refer to "container_" metrics in their targets
4. Check the Prometheus ConfigMap:
   $ oc get smcp
   NAME    READY   STATUS            PROFILES      VERSION   AGE
   basic   10/10   ComponentsReady   ["default"]   2.1.1     23m
   $ oc get cm -n istio-system
   NAME                                                               DATA   AGE
   ior-leader                                                         0      22m
   istio-basic                                                        2      23m
   istio-ca-root-cert                                                 1      22m
   istio-grafana                                                      3      21m
   istio-grafana-configuration-dashboards-istio-extension-dashboard   1      21m
   istio-grafana-configuration-dashboards-istio-mesh-dashboard        1      21m
   istio-grafana-configuration-dashboards-istio-performance-dashboard 1      21m
   istio-grafana-configuration-dashboards-istio-service-dashboard     1      21m
   istio-grafana-configuration-dashboards-istio-workload-dashboard    1      21m
   istio-grafana-configuration-dashboards-pilot-dashboard             1      21m
   istio-namespace-controller-election                                0      22m
   istio-sidecar-injector-basic                                       2      23m
   jaeger-sampling-configuration                                      1      21m
   jaeger-service-ca                                                  1      21m
   jaeger-ui-configuration                                            1      21m
   kiali                                                              1      19m
   kiali-cabundle                                                     1      19m
   kube-root-ca.crt                                                   1      25m
   openshift-service-ca.crt                                           1      25m
   prometheus                                                         1      22m
   servicemesh-federation                                             0      22m
   trusted-ca-bundle                                                  1      22m
   $ oc describe cm/prometheus
   The Prometheus configuration does not scrape the Kubelet cAdvisor. As a result, panels that refer to cAdvisor metrics (those whose names begin with 'container_') show "No data". The shipped configuration contains only the explanatory comment, with the scrape job itself removed:
   # Scrape config for Kubelet cAdvisor.
   #
   # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
   # (those whose names begin with 'container_') have been removed from the
   # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
   # retrieve those metrics.
   #
   # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
   # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
   # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
   # the --cadvisor-port=0 Kubelet flag).
   #
   # This job is not necessary and should be removed in Kubernetes 1.6 and
   # earlier versions, or it will cause the metrics to be scraped twice.
   # config removed
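For reference, a cAdvisor scrape job of roughly the following shape would make the 'container_' metrics available again. This is a sketch based on the upstream Prometheus Kubernetes example configuration, not the exact job that was removed from the OSSM ConfigMap; the job name and the service-account token/CA paths are assumptions and would need to match the mesh's Prometheus deployment and RBAC.

```yaml
# Sketch of a Kubelet cAdvisor scrape job (adapted from the upstream
# Prometheus Kubernetes example config; names and paths are assumptions).
- job_name: kubernetes-cadvisor
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # Copy Kubernetes node labels onto the scraped series.
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    # Scrape through the API server proxy instead of each node directly.
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
```

Note that scraping through the API server proxy requires the Prometheus service account to have permission on the nodes/proxy resource.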