Type: Bug
Resolution: Unresolved
Priority: Minor
Affects Version: 4.17.z
Impact: Quality / Stability / Reliability
Severity: Low
Description of problem:
Checked in 4.17.0-0.nightly-2024-12-11-010531 (screenshot: https://drive.google.com/file/d/1yJdQkL2VcFQdsgyhYfS47v69q37yq7zm/view?usp=drive_link): the prometheus=openshift-monitoring/k8s label is not shown on the alert details page of the 4.17 admin console. See PR: https://github.com/openshift/monitoring-plugin/pull/53
The prometheus=openshift-monitoring/k8s label is present in both the Thanos querier and Alertmanager APIs:
$ token=`oc create token prometheus-k8s -n openshift-monitoring`
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?' --data-urlencode 'query=ALERTS{alertname="Watchdog"}' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "ALERTS",
          "alertname": "Watchdog",
          "alertstate": "firing",
          "namespace": "openshift-monitoring",
          "prometheus": "openshift-monitoring/k8s",
          "severity": "none"
        },
        "value": [
          1733972986.595,
          "1"
        ]
      }
    ],
    "analysis": {}
  }
}

$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://alertmanager-main.openshift-monitoring.svc:9094/api/v2/alerts?&filter={alertname="Watchdog"}' | jq
[
  {
    "annotations": {
      "description": "This is an alert meant to ensure that the entire alerting pipeline is functional.\nThis alert is always firing, therefore it should always be firing in Alertmanager\nand always fire against a receiver. There are integrations with various notification\nmechanisms that send a notification when this alert is not firing. For example the\n\"DeadMansSnitch\" integration in PagerDuty.\n",
      "summary": "An alert that should always be firing to certify that Alertmanager is working properly."
    },
    "endsAt": "2024-12-12T09:19:41.073Z",
    "fingerprint": "6934731368443c07",
    "receivers": [
      {
        "name": "Watchdog"
      }
    ],
    "startsAt": "2024-12-12T00:05:11.073Z",
    "status": {
      "inhibitedBy": [],
      "silencedBy": [],
      "state": "active"
    },
    "updatedAt": "2024-12-12T09:15:41.075Z",
    "generatorURL": "https://console-openshift-console.apps.anli41712.qe.devcluster.openshift.com/monitoring/graph?g0.expr=vector%281%29&g0.tab=1",
    "labels": {
      "alertname": "Watchdog",
      "namespace": "openshift-monitoring",
      "openshift_io_alert_source": "platform",
      "prometheus": "openshift-monitoring/k8s",
      "severity": "none"
    }
  }
]
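The same check can be scripted for quick verification. A minimal sketch, assuming oc is logged in to the affected cluster and jq is installed locally (the echo messages are illustrative, not part of any product output):

# Sketch: confirm the prometheus label is returned by the Thanos querier API;
# the console alert details page should display the same label.
token=$(oc create token prometheus-k8s -n openshift-monitoring)
oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- \
  curl -k -s -H "Authorization: Bearer $token" \
  'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?' \
  --data-urlencode 'query=ALERTS{alertname="Watchdog"}' \
  | jq -e '.data.result[0].metric.prometheus == "openshift-monitoring/k8s"' \
  && echo "prometheus label present in API" \
  || echo "prometheus label missing in API"

jq -e sets its exit status from the result of the comparison, so the script prints "present" when the label comes back from the API, which is the case on 4.17; the bug is only in what the console renders.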
There is no such issue in other versions.
Labels shown on the Watchdog alert details page of the admin console, by version:
4.19
namespace=openshift-monitoring prometheus=openshift-monitoring/k8s severity=none alertname=Watchdog
4.18
namespace=openshift-monitoring prometheus=openshift-monitoring/k8s severity=none alertname=Watchdog
4.17
alertname=Watchdog namespace=openshift-monitoring severity=none
4.16
namespace=openshift-monitoring prometheus=openshift-monitoring/k8s severity=none alertname=Watchdog
Version-Release number of selected component (if applicable):
Only 4.17 (checked with 4.17.0-0.nightly-2024-12-11-010531).
How reproducible:
Always on 4.17.
Steps to Reproduce:
1. Open the Watchdog alert details page in the 4.17 admin console (see the description).
2. Query the Thanos querier or Alertmanager API for the same alert and compare the labels with those shown in the console.
Actual results:
The prometheus=openshift-monitoring/k8s label is not shown on the alert details page of the 4.17 admin console, even though it is returned by the Thanos querier and Alertmanager APIs.
Expected results:
The prometheus=openshift-monitoring/k8s label should be shown, as it is in 4.16, 4.18, and 4.19.
Additional info: