Network Observability / NETOBSERV-391

Metrics & Prometheus setup - flow-based dashboards and metrics


    • Epic Name: netobserv-metrics-setup
    • Feature Link: OCPSTRAT-492 - Dashboard: Provide an overview of your networking state [Flow based]
    • Status: To Do
    • Size: M


      We should provide and document a default set of metrics, even if not all of them are consumed in the dashboards, to cover potential user needs with low effort. This set will also be needed for dashboard consumption (NETOBSERV-139).

      This epic covers:

      • From the operator, configure FLP to provide a set of metrics (we can start from this list and refine: https://github.com/netobserv/flowlogs-pipeline/blob/main/contrib/kubernetes/flowlogs-pipeline.conf.yaml#L171 ; we need to take care about cardinality, i.e. not indexing all IPs/pods, for instance)
      • Document this set of metrics
      • From the operator, create the needed ServiceMonitor resources (including for the console plugin, which also exposes internal metrics)
      • Provide guidance for deploying a Prometheus instance (similar to our loki-zero-click install), or for configuring the cluster Prometheus to collect third-party metrics
      • Provide an automated installation of the Prometheus operator when needed/desired, as in the Dependent Operators PoC (NETOBSERV-219)
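
      As a sketch of the ServiceMonitor item above, the operator could generate something like the following. The namespace, selector labels, and port name here are illustrative assumptions, not the operator's actual values:

      ```yaml
      # Hypothetical ServiceMonitor for scraping FLP metrics.
      # Namespace, labels and port name are assumptions for illustration.
      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: flowlogs-pipeline-monitor
        namespace: netobserv
      spec:
        selector:
          matchLabels:
            app: flowlogs-pipeline
        endpoints:
          - port: metrics
            interval: 30s
      ```

      A similar resource would be needed for the console plugin's internal metrics endpoint.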

      We may offer three levels of configuration for metrics collection:

      1. Turn on metrics collection (internal metrics + flow metrics)
      2. Turn on minimal set of metrics (only the flow metrics that are/will be used in dashboards)
      3. Turn off all metrics

      Default should be 1.
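
      These three levels might surface in the FlowCollector CR as a single enum-like field. The field name and values below are purely illustrative, not an agreed API:

      ```yaml
      # Hypothetical FlowCollector excerpt; "metricsLevel" and its values
      # are made-up names to illustrate the three-level idea.
      apiVersion: flows.netobserv.io/v1alpha1
      kind: FlowCollector
      metadata:
        name: cluster
      spec:
        processor:
          # ALL: internal metrics + flow metrics (proposed default)
          # MINIMAL: only the flow metrics used by dashboards
          # NONE: disable all metrics
          metricsLevel: ALL
      ```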


              jtakvori Joel Takvorian
              Mehul Modi
              Sara Thomas