OpenShift Logging / LOG-2732

Prometheus Operator pod throws 'skipping servicemonitor' error on Jaeger integration


    • Before this update, the Prometheus Operator skipped the Elasticsearch ServiceMonitor in Jaeger setups, which caused missing metrics observability for Jaeger Elasticsearch instances. With this update, the Elasticsearch Operator reconciles a proper ServiceMonitor resource using SafeTLSConfig, which resolves the missing metrics for Jaeger Elasticsearch instances. (A sketch of such a ServiceMonitor follows this list.)
    • Logging (LogExp) - Sprint 220, Log Storage - Sprint 221, Log Storage - Sprint 222
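
      The release note above refers to SafeTLSConfig, i.e. a ServiceMonitor endpoint whose bearer token and CA come from Secret/ConfigMap references rather than file paths inside the Prometheus pod. The sketch below shows what such an endpoint looks like; the secret, configmap, port and path names are assumptions for illustration, not the operator's actual output.

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: monitor-elasticsearch-cluster
        namespace: jk-test
      spec:
        endpoints:
        - port: elasticsearch                   # assumed port name on the elasticsearch-metrics service
          path: /_prometheus/metrics            # assumed metrics path
          scheme: https
          interval: 30s
          bearerTokenSecret:                    # token comes from a Secret, not from the pod filesystem
            name: elasticsearch-metrics-token   # hypothetical Secret holding a service account token
            key: token
          tlsConfig:                            # SafeTLSConfig: CA comes from a ConfigMap reference
            ca:
              configMap:
                name: openshift-service-ca.crt  # assumed CA bundle ConfigMap
                key: service-ca.crt
            serverName: elasticsearch-metrics.jk-test.svc
        selector:
          matchLabels:
            scrape-metrics: enabled             # assumed label on the elasticsearch-metrics service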

      This bug is a follow-up to LOG-2696 for Logging 5.4.

      Error on Prometheus Operator pod

      [kbharti@cube ~]$ oc logs prometheus-operator-c7bdc5c48-dhnx2 | tail -n 5
      
      level=warn ts=2022-06-15T13:13:51.759726324Z caller=operator.go:1832 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=jk-test/monitor-elasticsearch-cluster namespace=openshift-user-workload-monitoring prometheus=user-workload
      level=info ts=2022-06-15T13:13:51.879904029Z caller=operator.go:643 component=thanosoperator key=openshift-user-workload-monitoring/user-workload msg="sync thanos-ruler"
      level=info ts=2022-06-15T13:13:51.87999761Z caller=operator.go:1220 component=prometheusoperator key=openshift-user-workload-monitoring/user-workload msg="sync prometheus"
      level=warn ts=2022-06-15T13:13:51.884746471Z caller=operator.go:1832 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=jk-test/monitor-elasticsearch-cluster namespace=openshift-user-workload-monitoring prometheus=user-workload
      level=info ts=2022-06-15T13:13:51.934327802Z caller=operator.go:643 component=thanosoperator key=openshift-user-workload-monitoring/user-workload msg="sync thanos-ruler"
      

      The labels appear to match between the ServiceMonitor and the elasticsearch-metrics service, yet the PrometheusOperatorRejectedResources alert is being fired.
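
      For reference, "labels match" here means that the ServiceMonitor's spec.selector.matchLabels is a subset of the labels on the elasticsearch-metrics Service, roughly as sketched below (label keys, values and port numbers are placeholders, not the attached manifests):

      apiVersion: v1
      kind: Service
      metadata:
        name: elasticsearch-metrics
        namespace: jk-test
        labels:
          cluster-name: elasticsearch       # placeholder labels that the ServiceMonitor selects on
          scrape-metrics: enabled
      spec:
        ports:
        - name: elasticsearch               # port name referenced by the ServiceMonitor endpoint
          port: 60001
          targetPort: 60001
        selector:
          cluster-name: elasticsearch       # placeholder pod selector
      ---
      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: monitor-elasticsearch-cluster
        namespace: jk-test
      spec:
        endpoints:
        - port: elasticsearch               # must name a port on the Service above
        selector:
          matchLabels:                      # must match the Service labels above
            cluster-name: elasticsearch
            scrape-metrics: enabled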

      The servicemonitor and elasticsearch-metrics manifests are attached.
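
      Judging by the log message above, the rejection is not caused by the selector: the Prometheus Operator for user workload monitoring refuses any ServiceMonitor whose endpoints read credentials from the Prometheus pod's filesystem. The attached servicemonitor.yaml presumably contains an endpoint along these lines (a reconstruction for illustration, not the actual attachment):

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: monitor-elasticsearch-cluster
        namespace: jk-test
      spec:
        endpoints:
        - port: elasticsearch
          scheme: https
          # file-based access is exactly what user workload monitoring prohibits
          bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          tlsConfig:
            caFile: /etc/prometheus/secrets/elasticsearch-ca/admin-ca   # file path instead of a Secret/ConfigMap reference
            serverName: elasticsearch-metrics.jk-test.svc
        selector:
          matchLabels:
            scrape-metrics: enabled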

      Steps to reproduce:

      1. Enable user-workload-monitoring
      2. Create a Jaeger CR as attached (a minimal sketch follows this list)
      3. oc get servicemonitor -o yaml
      4. Observe if the matchLabels are correct
      5. Observe the prometheus-operator pod in openshift-user-workload-monitoring and the warning: level=warn ts=<timestamp> caller=operator.go:1832 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=<namespace>/<name> namespace=openshift-user-workload-monitoring prometheus=user-workload
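
      For step 2, a minimal Jaeger CR could look like the sketch below; the name and sizing are illustrative rather than the attached jaeger.yaml. With the production strategy and no external Elasticsearch URL, the Jaeger Operator delegates Elasticsearch provisioning to the OpenShift Elasticsearch Operator, and that instance is the one whose metrics go missing here.

      apiVersion: jaegertracing.io/v1
      kind: Jaeger
      metadata:
        name: jaeger-prod                   # hypothetical name
        namespace: jk-test
      spec:
        strategy: production                # production strategy stores spans in Elasticsearch
        storage:
          type: elasticsearch
          elasticsearch:                    # provisioned via the OpenShift Elasticsearch Operator
            nodeCount: 1
            redundancyPolicy: ZeroRedundancy
            resources:
              requests:
                cpu: 200m
                memory: 1Gi
              limits:
                memory: 1Gi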

      CSV:

      [kbharti@cube ~]$ oc get csv
      NAME                           DISPLAY                                          VERSION   REPLACES   PHASE
      elasticsearch-operator.5.4.2   OpenShift Elasticsearch Operator                 5.4.2                Succeeded
      jaeger-operator.v1.30.2        Red Hat OpenShift distributed tracing platform   1.30.2               Succeeded
      

       

       

      Attachments:
        1. elasticsearch-metrics-svc.yaml (1.0 kB, Kabir Bharti)
        2. jaeger.yaml (0.8 kB, Kabir Bharti)
        3. servicemonitor.yaml (2 kB, Kabir Bharti)
