Type: Bug
Resolution: Unresolved
Priority: Normal
Severity: Important
Affects Versions: 4.16, 4.17, 4.18, 4.19, 4.20
Description of problem:
After disabling the local Alertmanager as described in the documentation (https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/configuring-core-platform-monitoring), the PrometheusNotConnectedToAlertmanagers alert fires in the cluster, and DNS requests like these keep failing:

alertmanager-main.openshift-monitoring.svc.openshift-insights.svc.cluster.local
alertmanager-main.openshift-monitoring.svc.svc.cluster.local

It seems that the Insights Operator still has some Alertmanager-related configuration:

oc logs -n openshift-insights insights-operator-7ccd69f68c-frnvf | grep alertmanager
Recording config/pdbs/openshift-monitoring/alertmanager-main
alert "AlertmanagerReceiversNotConfigured" has state "pending"
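For reference, the local Alertmanager was disabled through the cluster-monitoring-config ConfigMap as described in the linked documentation. A minimal sketch of that step (this assumes the ConfigMap does not already contain other settings; otherwise merge the key into the existing config.yaml):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false
EOF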
Version-Release number of selected component (if applicable):
4.16 - 4.20
How reproducible:
100%
Steps to Reproduce:
1. Disable the local Alertmanager.
2. Check that the PrometheusNotConnectedToAlertmanagers alert fires.
3. Enable trace or debug log level in the DNS pods (see the example commands after this list).
4. Check the DNS pod logs. There will be requests like these:
alertmanager-main.openshift-monitoring.svc.openshift-insights.svc.cluster.local
alertmanager-main.openshift-monitoring.svc.svc.cluster.local
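One way to do steps 3 and 4, assuming the default DNS operator CR and DaemonSet names (dns.operator/default and dns-default in openshift-dns):

# Raise the CoreDNS log level (Trace also works)
oc patch dns.operator/default --type=merge -p '{"spec":{"logLevel":"Debug"}}'

# Look for the failing Alertmanager lookups in the CoreDNS logs
oc logs -n openshift-dns ds/dns-default -c dns | grep alertmanager-main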
Actual results:
The PrometheusNotConnectedToAlertmanagers alert fires, and DNS requests for the alertmanager-main service keep failing.
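A sketch of one way to confirm the alert from the command line, assuming the default prometheus-k8s-0 pod name and that curl is available in the prometheus container:

oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- \
  curl -s http://localhost:9090/api/v1/alerts | grep PrometheusNotConnectedToAlertmanagers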
Expected results:
When Alertmanager is disabled, there should be no Alertmanager DNS requests or related alerts.
Additional info: