Bug
Resolution: Done
Major
4.12
Quality / Stability / Reliability
Moderate
Rejected
Description of problem:
Not all monitoring components configure Prometheus to use mTLS when accessing their /metrics endpoints; some still use bearer token authentication (for instance, openshift-state-metrics).
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check the scrape configuration generated for the Prometheus pods (for example, the configuration rendered by the Prometheus Operator into the prometheus-k8s secret in the openshift-monitoring namespace).
2. Look for scrape jobs whose tls_config doesn't set cert_file and key_file.
Actual results:
The scrape configurations for the following monitoring components don't use mTLS for scraping metrics:
* openshift-state-metrics
* thanos-ruler (when user workload monitoring (UWM) is enabled)
Their scrape configuration looks like this (note that there are no cert_file and key_file entries):
authorization:
  type: Bearer
  credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
  ca_file: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
  server_name: cluster-monitoring-operator.openshift-monitoring.svc
  insecure_skip_verify: false
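For context, a bearer-token scrape configuration like the one above typically comes from a ServiceMonitor endpoint that only sets bearerTokenFile and the serving CA. A minimal sketch of such an endpoint; the name, port, and selector are illustrative and not copied from the actual manifests:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openshift-state-metrics        # illustrative name
  namespace: openshift-monitoring
spec:
  endpoints:
    - port: https-main                 # illustrative port name
      scheme: https
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        caFile: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
        serverName: openshift-state-metrics.openshift-monitoring.svc
        insecureSkipVerify: false
  selector:
    matchLabels:
      app.kubernetes.io/name: openshift-state-metrics   # illustrative selector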
Expected results:
Scrape configurations use mTLS for authentication, for example:
tls_config:
  ca_file: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
  cert_file: /etc/prometheus/secrets/metrics-client-certs/tls.crt
  key_file: /etc/prometheus/secrets/metrics-client-certs/tls.key
  server_name: alertmanager-main.openshift-monitoring.svc
  insecure_skip_verify: false
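For comparison, the mTLS variant roughly corresponds to a ServiceMonitor endpoint whose tlsConfig references the client certificate and key from the metrics-client-certs secret (paths as in the snippet above), assuming that secret is mounted into the Prometheus pods via the Prometheus CR's spec.secrets. A minimal sketch with illustrative name, port, and selector:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: alertmanager-main              # illustrative, matches the server_name above
  namespace: openshift-monitoring
spec:
  endpoints:
    - port: metrics                    # illustrative port name
      scheme: https
      tlsConfig:
        caFile: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
        certFile: /etc/prometheus/secrets/metrics-client-certs/tls.crt
        keyFile: /etc/prometheus/secrets/metrics-client-certs/tls.key
        serverName: alertmanager-main.openshift-monitoring.svc
        insecureSkipVerify: false
  selector:
    matchLabels:
      app.kubernetes.io/name: alertmanager   # illustrative selector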
Additional info:
cluster-monitoring-operator still uses a bearer token for authentication because it's managed by the CVO (Cluster Version Operator) and there is no easy way to inject the client CA into the cluster-monitoring-operator deployment.